Graphics Cards Guides - PremiumBuilds
https://premiumbuilds.com/category/graphics-cards/ (Fri, 31 Dec 2021)

What is the Optimal Monitor Resolution for RTX 3060 Builds?
https://premiumbuilds.com/monitors/best-resolution-for-rtx-3060/ (Fri, 31 Dec 2021)

The RTX 3060 sits as a mid-range GPU champion. Its MSRP of $329 places it perfectly among the rest of NVIDIA’s stock, although getting your hands on any graphics card at the recommended price is still a challenge. This card is not meant to push 4K gaming or anything close. Instead, it serves the 1080p crowd perfectly, and can respectfully manage a few games at 1440p. Perhaps more than any other card from the 3000 series, the rest of your rig will have to pull its weight for the best performance.

With all of that in mind, the question of the best resolution to pair with this card comes up. Note that everything we just said is about the RTX 3060, not the RTX 3060 Ti. If you manage to upgrade to that even more elusive card, your resolution options open up significantly. While this article focuses mostly on the original 3060, we’ll reference the 3060 Ti a few times. And, to put it out there: if you have the choice between the 3060 and the 3060 Ti at MSRP, choose the upgrade. The extra $70 will be worthwhile.

Above all else, the 3060 is a 1080p GPU. You should always look at what games you play and their requirements, as well as the rest of your rig. However, for those looking to future-proof their build or game in high resolutions, you’ll have a better time waiting for an upgrade. Let’s see why and how to optimize this card.

The Champion of 1080p Gaming with RTX

For those looking to play the most modern games at good frame rates on the RTX 3060, there’s only one resolution option. 1080p became the standard monitor resolution years ago, and for good reason. It’s sharp enough to be usable at almost all screen sizes, games can hit ludicrous frame rates on it, and 1080p monitors are relatively cheap.

Compared to other cards in both the 2000 and 3000 series, the RTX 3060 is an exceptional deal for playing at 1080p. This is especially true for those interested in utilizing RTX technology for ray tracing and special effects. At only $329, it’s one of the cheapest ways to access this tech and get great performance. Across benchmarks, its performance is almost equivalent to the 2060 Super while being slightly cheaper.

Techspot’s analysis of modern games such as Assassin’s Creed: Valhalla, Watch Dogs: Legion, and Death Stranding places this GPU around the middle of the pack at maximum settings. While this might seem like a lackluster result, it’s actually great performance for the price. Most GPUs ranking higher than the RTX 3060 are much more expensive and made for gaming at higher resolutions. Notably, the 3060 Ti breaks this mold. It’s only $70 more at MSRP but consistently beats more expensive cards, even outperforming the RTX 2080.

RTX 3060 performance suffers moving up to 1440p. In fact, at maximum settings, it rarely even hits 60 FPS on the same titles. While many gamers would be fine using the 3060 for 1440p gaming, it won’t be able to take advantage of monitors with refresh rates higher than 60 Hz.


What Refresh Rate to Aim for While Gaming At 1080p

As we mentioned earlier, the exact performance you’ll get with this card depends heavily on the rest of your setup. However, assuming your CPU isn’t a major bottleneck, there are some general guidelines we can provide. Most people gaming at 1080p on the RTX 3060 will comfortably hit 120 or 144 frames per second using medium settings. If you bump that up to ultra settings or care more about RTX, you’ll probably be closer to 60 frames per second.

If you’re looking to upgrade your monitor, a 120 Hz model is a safe bet. With that said, it may not be worth upgrading specifically because of this card. 1080p monitors have been the standard for so long, many of them running at 60 Hz, that your current setup may be more than enough.

However, for those of you who need an upgrade, consider grabbing a monitor with a higher refresh rate. We recommend splurging for the upgrade for three reasons:

  1. You will likely play many games that can reach those higher frame rates
  2. The monitor will be better for future upgrades to your build
  3. The difference in price between a 60 Hz and 120 or 144 Hz 1080p monitor is minimal
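To put those refresh rates in context, the time budget to render each frame is simply the reciprocal of the refresh rate. A quick sketch of the math (plain Python, using only the refresh rates discussed above):

```python
def frame_budget_ms(refresh_hz: float) -> float:
    """Milliseconds available to render one frame at a given refresh rate."""
    return 1000.0 / refresh_hz

# At 60 Hz the GPU has 16.7 ms per frame; at 144 Hz, only 6.9 ms.
for hz in (60, 120, 144):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.1f} ms per frame")
```

In other words, a 144 Hz monitor gives the GPU less than half the time per frame that a 60 Hz panel does, which is why a card that comfortably drives 60 Hz at ultra settings needs medium settings to feed a high-refresh display.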

Choosing A Higher Resolution

While the RTX 3060 performs best at a 1080p resolution, gaming at 1440p is certainly a possibility. If you’re considering leaving behind higher frame rates for graphical fidelity, you’re not making an incorrect choice. We recommend upgrading to a 3060 Ti instead if you can find one, however. It simply performs far better while gaming at 1440p than the basic 3060 can.

Consider that RTX features and ray-tracing capabilities will be limited while playing at 1440p on this card. With these effects on at a higher resolution, it’s easy for the 3060 to fall well below 40 FPS, which many people consider unplayable. Think about your personal tolerance and preferences before deciding.


Closing Thoughts

The RTX 3060 is currently one of the easiest graphics cards on the market to place. When found at MSRP, it is a respectable deal that serves all gamers who aren’t interested in upgrading to 1440p resolution just yet. Its graphical prowess will get you through all modern games and likely the next few years with no issues, even on maximum settings.

It’s not the choice for those with enormous budgets or who want to push the very edge of gaming, but it’s not supposed to be. There is, however, enough zip in this card to deliver respectable 1440p performance in a number of games. As always, consider what games you play and their requirements to make your final decision. If it were us, we’d choose a 1080p resolution with a high frame rate for this card every time.


Relevant Guides

Want to read more about the RTX 3060 and its capabilities? We’ve written plenty about NVIDIA’s 3000 series and have the answers to all your questions. Check out these articles to get a head start:

How Much VRAM Do I Need for Gaming?
https://premiumbuilds.com/graphics-cards/how-much-vram-do-i-need-for-gaming/ (Fri, 27 Aug 2021)


Everyone interested in PC gaming has heard about how important a good graphics card is. As the avenue that lets you experience the newest titles, you want to be sure that your card is up to the task. VRAM is a major factor in that decision. It’s an important statistic in determining the power of your graphics card, but how much do you actually need?

Modern cards range from 4GB all the way to over 12GB of VRAM. While more VRAM is never a bad thing, prices can quickly get out of hand, so setting a target range is a good idea. Plus, VRAM isn’t the only thing that affects game performance. Let’s explore what VRAM is and the biggest things that affect how much you need before diving into specific numbers.

What Is VRAM?

VRAM is an acronym for Video Random Access Memory. It’s built and optimized for the graphics card, making it exceptional at certain tasks like texture loading and frame buffering. You can think of it as a specialized version of the RAM that already exists in your computer, although there are a few key differences.

To start, VRAM cannot be upgraded or swapped out later. It is soldered directly into the graphics card, so you will have to replace the whole unit to see better performance. This makes future-proofing during your initial purchase a good idea, as game requirements see constant updates. The more demanding the game, the more VRAM you need, with some exceptions based on other metrics.

Like normal RAM, VRAM serves to speed up common tasks. The VRAM on your graphics card temporarily stores graphics data such as textures and frame buffers, making your graphics card far faster. The more VRAM you have, the more can be kept in this quick-access state. If your graphics card needs something that cannot fit in VRAM, it has to fall back on system RAM or even your SSD or HDD. This causes significant slowdowns.

Finally, know that VRAM is not the only important part of a graphics card. Cards with 6GB of VRAM can sometimes outperform cards with 8GB due to better optimization and more power elsewhere. Be sure to research other parts of the graphics card and look for how many teraflops of power it has before making a final purchase decision.


Resolution Greatly Affects VRAM Usage

The first and biggest factor in how much VRAM you need is resolution. Higher resolutions need significantly more VRAM to achieve the same performance. This is because the number of pixels on the screen grows rapidly between resolutions; a 4K image has four times as many pixels as a 1080p image. For the purposes of VRAM, this drastically increases the memory each frame takes on the card. If you have ever lagged in a game and lowered your resolution, you probably noticed immediate performance gains.

Every frame while gaming has to be processed and pushed out at the set resolution. Most monitors have resolutions of 1080p, 1440p, or 4K. If you are interested in gaming at the higher resolutions, you need enough VRAM to buffer frames of that size. 8GB of VRAM quickly becomes the minimum for AAA gaming in 4K, stepping up to 12GB for higher frame rates.
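To make that concrete, here is a rough sketch of the per-frame pixel counts and raw colour-buffer sizes, assuming an uncompressed 32-bit (4 bytes per pixel) buffer. Real games keep many buffers plus textures in VRAM, so treat these as illustrative floors, not requirements:

```python
BYTES_PER_PIXEL = 4  # 32-bit colour, uncompressed (an assumption for illustration)

resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

for name, (w, h) in resolutions.items():
    pixels = w * h
    mib = pixels * BYTES_PER_PIXEL / 2**20
    print(f"{name}: {pixels:>9,} pixels, ~{mib:.1f} MiB per colour buffer")

# Note: 4K has exactly four times the pixels of 1080p.
print((3840 * 2160) / (1920 * 1080))  # 4.0
```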


The Games You Play Decide How Much VRAM You Need

This is a well-known rule of graphics; the more advanced the game’s graphics, the more powerful the graphics card needs to be. While there are other factors that determine the power of a GPU, VRAM is a decent starting point.

How much VRAM a game requires depends on how well optimized it is, the style of the game, and its graphical prowess. As expected, running Red Dead Redemption 2 is quite a bit more taxing than Team Fortress 2. In general terms, more modern games require more VRAM to play. While you can adjust in-game settings to improve performance – we’ll talk about that in the next section – all games have a base VRAM floor that must be met. Otherwise, you will always lag and see poor performance.

While new games can require a daunting amount of VRAM, this may be a blessing in disguise. Even the cheapest card on the market can likely run games through 2014 flawlessly. Plus, there are games that rely more heavily on the CPU or don’t need much at all, like Minecraft or Risk of Rain. If you mainly play indie games or older titles, you can get a great graphics card for cheap. You can (and should) check the system requirements for your favorite games to get an idea of what you need. Be sure to add an extra 2-4GB of VRAM for high-resolution gaming.


In-Game Settings Affect Performance

Finally, in-game settings can be tweaked to your heart’s content. For those of you looking to save on VRAM, you can turn down most settings and see major performance jumps. Those of us lucky to have a top-of-the-line graphics card can do the opposite, launching VRAM usage through the roof in the name of graphical fidelity.

Some modern games, like Warzone, show you how much VRAM the game uses vs. what is available. This is a nice feature to have while tweaking settings but is not necessary. While some settings have a stronger effect on VRAM usage than others, you’ll want to lower most settings for the best performance.

One of the key settings to keep an eye on is anti-aliasing. This is the setting that smooths out the edges of 3D models in the game, providing a nice, polished look. Unfortunately, it is also one of the most intensive settings due to how it works. In the simplest terms, anti-aliasing renders extra samples around object edges and blends them together to smooth the result. For reasons similar to the resolution effect above, this can quickly get out of hand. Try lowering or even turning off anti-aliasing to reduce VRAM usage.
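As a simplified illustration of why multisample anti-aliasing (MSAA) is so costly (a toy model that ignores the compression tricks real GPUs apply): the sample buffers grow linearly with the MSAA factor, on top of the resolution cost above.

```python
def msaa_samples_mib(width: int, height: int, samples: int,
                     bytes_per_sample: int = 8) -> float:
    """Approximate colour + depth sample storage in MiB.

    Assumes 4 bytes of colour and 4 bytes of depth per sample (a toy model).
    """
    return width * height * samples * bytes_per_sample / 2**20

for samples in (1, 2, 4, 8):
    print(f"{samples}x MSAA at 1440p: ~{msaa_samples_mib(2560, 1440, samples):.0f} MiB")
```

Going from no MSAA to 8x multiplies that storage eightfold, which is why dropping anti-aliasing a notch is often the single cheapest way to claw back VRAM.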

Other areas like texture quality and particle effects are also notoriously draining. For most intensive games, you can find guides online to optimize the settings. There is almost always a good balance to be struck between performance and looks to help you out.


How Much VRAM Do You Need?

We’ve touched upon the numbers throughout the article, but here is a definitive list of what you need. While we’ve listed these by resolution, remember that other factors also play a role. You probably don’t need 8GB of VRAM to play Minesweeper, even in 4K. Use your best judgment and look up what FPS other systems get in your favorite games.

  • 1080p – 2GB-4GB of VRAM
  • 1440p – 4GB-8GB of VRAM
  • 4K – 8GB+ of VRAM

A final quick note: these are recommendations for modern games. If you find yourself returning to classics like Portal 2 or Dark Souls, you will not need as much power under the hood.


Summary


The amount of VRAM you need depends on several factors. While shopping around for a new card or checking if your system is up to snuff, always assume that more is better. You can still be reasonable, however – if you are comfortably gaming in 1080p, you do not need to splurge on an RTX 3080.

Refer to our list above to find the resolution that will best suit your gaming needs. Keep in mind that resolution, game, and in-game settings all have dramatic effects on how much VRAM you’ll need. If your budget allows for it, go higher.

Future-proofing your expensive hardware like the GPU and CPU is a good idea, especially given how quickly game requirements can advance. Given their price in the current market, we recommend starting with at least 4GB of VRAM and going from there. 8GB of VRAM is ideal for new builders. This will let you play modern games at max settings – a performance that will likely carry into the next generation of titles too.


Relevant Guides

Interested in exploring the newest graphics cards and seeing how they compare? We’ve got you covered with plenty of guides here:

Nvidia RTX 3080 Ti Review: Top Flight Gaming, But At What Cost?
https://premiumbuilds.com/reviews/nvidia-rtx-3080-ti-review/ (Mon, 02 Aug 2021)

In June, Nvidia released several new GPUs including the RTX 3080 Ti. This high-end GPU uses the same GA102 core as the RTX 3090 and RTX 3080 that bracket it, as well as 12GB of GDDR6X VRAM. This GPU offers an absolutely top draw experience but can it possibly justify the price tag?

In this review, we’ve pitted it against the RTX 3080 and the AMD RX 6800 XT, as well as the top-tier last-generation Nvidia card, the RTX 2080 Ti, to find out what it offers.

1. Specification Comparison

| GPU | RTX 3080 Ti | RTX 3080 | RTX 3090 | RX 6800 XT | RTX 2080 Ti |
|---|---|---|---|---|---|
| GPU Core | GA102-225-A1 8nm | GA102-200-KD-A1 8nm | GA102-300-A1 8nm | Navi 21 7nm | TU102-300A-K1-A1 12nm |
| Shader units | 10240 | 8704 | 10496 | 4608 | 4352 |
| RTX Cores | 80 | 68 | 82 | 72 (AMD 1st Gen) | 68 (Nvidia 1st Gen) |
| Tensor Cores | 320 | 272 | 328 | N/A | 544 (1st Gen) |
| VRAM | 12GB GDDR6X | 10GB GDDR6X | 24GB GDDR6X | 16GB GDDR6 | 11GB GDDR6 |
| VRAM Bus Width | 384 bit | 320 bit | 384 bit | 256 bit | 352 bit |
| Pixel Rate | 186.5 GPixel/s | 164.2 GPixel/s | 189.8 GPixel/s | 288.0 GPixel/s | 136.0 GPixel/s |
| Texture Rate | 532.8 GTexel/s | 465.1 GTexel/s | 556.0 GTexel/s | 648.0 GTexel/s | 420.2 GTexel/s |
| TDP | 350W | 320W | 350W | 300W | 250W |
| Price (MSRP/Actual) | $1,199/$1500 | $699/$1200+ | $1,499/$2000+ | $649/$1000 | $999/$600 (used) |

VRAM

Looking at the key specifications, we can see how closely the RTX 3080 Ti matches the RTX 3090. The principal difference is the halving of VRAM capacity, from 24GB to 12GB. This is still ample for gaming, but it reduces the cost of parts significantly, with the Micron/Nvidia-exclusive GDDR6X costing around $100 per 10GB, as well as the power draw, with VRAM power consumption topping 100W in the RTX 3090. It uses the same 384-bit bus, providing very high bandwidth access to VRAM, and this is the real reason for the slight increase to 12GB over the 3080’s 10GB: the wider bus requires 12GB of VRAM or multiples of that.
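The bandwidth side of that trade-off is easy to sketch: peak VRAM bandwidth is the bus width in bytes multiplied by the effective per-pin data rate. Assuming the commonly quoted 19 Gbps for the GDDR6X on these cards (an assumption for illustration, not a measured figure):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(384, 19.0))  # 384-bit bus (3080 Ti / 3090): 912.0
print(peak_bandwidth_gbs(320, 19.0))  # 320-bit bus (3080): 760.0
```

The wider 384-bit bus buys roughly 20% more raw bandwidth at the same memory speed, which matters far more for gaming than the capacity difference does.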

Cores & Shader Units

The core itself loses just 256 of over 10,000 shader units versus the RTX 3090, along with 8 Tensor cores and 2 RTX cores. This is a near-identical specification to the RTX 3090, which indicates that it should perform very similarly too.

Of the other important specifications, we can’t compare shader units against the AMD card or the last-generation RTX 2080 Ti, as they’re different architectures, and the same goes for ray tracing cores. The RX 6800 XT posts impressive theoretical fill rates, but from testing we know it matches the RTX 3080 incredibly closely in rasterised gaming performance.

Pricing

Finally, we come to pricing, and this is really where the controversy lies. The RTX 3090 was criticised for being too expensive at $1500, and not worth it for gaming where the 24GB VRAM went unused. Then of course everything went crazy, and the 3090 became a veritable money-printing machine thanks to its Ethereum mining capability.

$1500 doesn’t sound so bad when a card can earn $10 a day, but then of course prices rose to account for that with cards at well over $2,000 at retail and the second-hand market.

The RTX 3080 Ti launched at a nominal $1,199 price point, but retail prices immediately climbed past $1,500, except for the very few Founders Edition cards where retailers were bound to honour Nvidia’s pricing. So what we’re looking at here is a card retailing at around $1,500 at this time. And they’re all ‘Lite Hash Rate’ (LHR) cards, so you can’t mine as efficiently during downtime to recoup some of the cost.

You can make a persuasive argument that no ‘gaming’ GPU is worth that, but that’s something we’ll consider after looking at the benchmark results. 

2. Benchmarks

We’ve divided the benchmarks up game by game and run all resolutions so you can focus on what’s most relevant to you. We’ll highlight at this point that none of the cards in this test should be run at 1080p; it’s simply a waste of their potential, but the numbers are there anyway.

Test Bench

We’ve maintained the same test bench of a Ryzen 5800X, a B550 motherboard, and 16GB of 3600MHz CL16 RAM with the Infinity Fabric and memory clock set 1:1. We ran a Fractal Design Ion Platinum 860W power supply to ensure adequate power. This is a high-performance system, with the 5800X the equal of any CPU available right now in terms of gaming performance. It’s optimised with good RAM speed, but not overclocked beyond PBO being enabled.

We want this test bench to represent the kind of system this GPU would actually be used with. In keeping with this, we run games at representative ‘high to ultra’ settings to show the kind of performance you can actually expect in-game. Simply cranking all settings to ultra often misrepresents a GPU’s actual performance by overburdening either it or the CPU with settings that haven’t been optimised and that trash performance for little visual gain.

Synthetic benchmarks

First, looking at synthetic benchmarks through 3DMark: Fire Strike is the DirectX 11 test and renders at 1080p. The RX 6800 XT excels here, and the RTX 3080 Ti can’t beat its score, giving away nearly 5,000 points. However, it does have a clear margin over the RTX 3080, which sits 6,000 points behind it, and the RTX 2080 Ti is over 10,000 points behind the 3080 Ti overall.



Time Spy shows the 3080 Ti leapfrog the RX 6800 XT in this DirectX 12-based 1440p graphics test, which is more representative of current games. It’s 1,500 points ahead of the AMD card and 2,500 ahead of the RTX 3080, with a lead of over 5,000 points on the RTX 2080 Ti.

Finally, to test ray tracing performance, we can take a quick look at the scores in Port Royal. Here the RTX 3080 Ti uses its 12-RTX-core advantage to romp home 2,000 points above the RTX 3080 and 4,000 points ahead of both the RTX 2080 Ti and RX 6800 XT. It’s the clear winner in this test.

Gaming Benchmarks

Call of Duty: Warzone

Warzone is first up. This is tested by running a five-minute battle royale against Bots, and logging metrics. The recent update knocked performance back about 15% across the board, and I’ve had to omit the RX 6800 XT as we no longer have it available for testing – it performed near identically to the RTX 3080 so please take that as a proxy.



Warzone proves itself a stern test of both CPU and GPU and can’t generate the very high FPS some other shooters can. The 3080 Ti only marginally outperforms the 3080 at 1080p, scoring 221 FPS average to 213 FPS. At 1440p there’s again only a 10 FPS difference, 180 FPS to 170 FPS, which isn’t in keeping with the on-paper specification difference. At ultrawide 1440p we see a slightly wider gap, proportionally, with a 16 FPS difference. The RTX 2080 Ti is 30 FPS behind throughout. Finally, at 4K the RTX 3080 Ti posts just over 100 FPS at 110, whilst the 3080 makes 96 FPS. Overall in Warzone, we don’t see a performance gap commensurate with either the specifications or the pricing of these GPUs.

Rainbow 6 Siege

Rainbow 6 Siege runs much faster across the board, and again re-testing means we omit the RX 6800 XT here. At 1080p, 1440p, 1440p ultrawide, and 4K, the 3080 Ti posts about a 10% uplift versus the RTX 3080. There’s no yawning gap in performance here, just a few more frames.



Doom Eternal

Doom Eternal uses the Vulkan API and is well optimised, and here we can compare the RX 6800 XT, which performs well at lower resolutions. The RTX 3080 Ti has a more commanding lead over the RTX 3080 in this title, particularly at higher resolutions. At 1440p it holds 337 FPS vs 273 for the RTX 3080, and at ultrawide it’s 266 FPS over the RTX 3080’s 238 FPS. At 4K the RTX 3080 Ti manages 186 FPS in our testing, with the RTX 3080 and RX 6800 XT tied at 160 FPS.

Red Dead Redemption 2

Moving on to the AAA titles in our test suite and looking at Red Dead Redemption 2, the RTX 3080 Ti again tops the charts, but not by a huge amount. Just 10 FPS separates it from the RTX 3080 across the board, from 1080p to 4K.


Shadow of the Tomb Raider

Shadow of the Tomb Raider has always shown good scaling with hardware and isn’t particularly CPU limited for the bulk of the benchmark run, although the final village scene is, providing a good overview of system performance. Here it’s no different, with a good 20% advantage over the RTX 3080: 40 FPS faster at 1080p, 20 FPS at 1440p and 1440p ultrawide, and 17 FPS better at 4K. Those are fairly impressive steps up in isolation.


Flight Simulator 2020

This demanding but gorgeous simulator delivers a cautionary tale. Our custom benchmark is designed to fully tax CPU and GPU with a low-level, three-minute AI-controlled flight over Manhattan. We’ve shown results for both average performance and 1% lows here to better illustrate the results: this GPU is NOT the performance saviour for Flight Sim 2020. You can see that this game is CPU limited with all of these GPUs at 1080p, 1440p, and 1440p ultrawide. Only at 4K does the RTX 3080 Ti pull ahead, but even then it’s matched by the RTX 3080, and we’re STILL CPU limited to around 48 FPS average. The long story cut short here is that, despite its reputation, Flight Sim actually isn’t GPU dependent: you need a top-flight CPU to make this game run well.

3. Ray Tracing: A Subjective Assessment

This is more of a subjective assessment of the ray tracing experience, for a few reasons. First is the hand-in-glove nature of RTX and DLSS, with the upsampling technology giving a massive boost to performance while also allowing you to tweak settings to your preferred balance of fidelity against frame rates. Second is the fast-evolving nature of RTX implementations in games. Games like Control and Metro Exodus remastered really do show this feature off well, with naturalistic lighting and well-judged effects. We’ve been playing Metro Exodus remastered at 1440p ultrawide with RTX on but no DLSS, and play is fast, fluid, and utterly gorgeous, though you’d hope that to be the case with a range-topping GPU. The long and short of it is that this GPU offers one of the best gaming experiences currently available using these technologies from Nvidia, and that’s as you’d expect.

4. Temperatures & Power Draw

Taking a quick look at temperatures and power draw: running default settings and logging metrics through a Time Spy run to give a load representative of gaming, we see the 3080 Ti draw around 390 to 400 watts under load. Temperatures on this FTW3 card remain acceptable, with the core reaching around 75°C and the GDDR6X memory junction at 86°C. This is pretty good; under heavy load we can expect core temperatures to reach 95°C, and GDDR6X will run as hot as 105°C under continuous heavy loads or when airflow is restricted. Many owners resort to modifying their cards with thermal pads to transmit heat away from the VRAM and into the backplate.

Overall, the power draw of this card will demand a very capable power supply, and you may want to investigate undervolting it to keep power draw and temperatures lower. Compared to the RTX 3080, which draws around 340W, this card consumes around 20% more power for 10% or so more performance, which is not a great result comparatively speaking. A good quality 750W power supply should be considered a minimum for this card; we did test it with a high-quality 650W power supply, the Antec EarthWatts Gold, and it forced system shutdowns on a few occasions.
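That efficiency comparison can be sketched numerically. The figures below are the rough ones from our testing (about 10% more performance for about 20% more power); they are illustrative, not precise measurements:

```python
def fps_per_watt(avg_fps: float, watts: float) -> float:
    """Simple efficiency metric: average frames per second per watt drawn."""
    return avg_fps / watts

# Normalised FPS (RTX 3080 = 100, the Ti ~10% faster) against measured power draw.
eff_3080 = fps_per_watt(100.0, 340.0)
eff_3080_ti = fps_per_watt(110.0, 400.0)

print(f"RTX 3080:    {eff_3080:.3f} FPS/W")
print(f"RTX 3080 Ti: {eff_3080_ti:.3f} FPS/W")
print(f"Ti efficiency relative to 3080: {eff_3080_ti / eff_3080:.1%}")
```

The Ti ends up several percent less efficient per watt than the plain 3080, which is the comparison driving the conclusion above.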

Conclusions: Can you justify the cost of the RTX 3080 Ti?

What we inevitably come to is the question of value: is this GPU worth the $1,500 it’s currently retailing for? The answer is an unequivocal no. It’s simply impossible to justify the price of this GPU on performance grounds. You lose virtually nothing by opting for an RTX 3080 instead and lowering just a few settings for an equivalent experience. It has all the same features and capabilities, and it uses much less power. The trouble, of course, is that RTX 3080s aren’t readily available at anything close to MSRP.



So we’re left with a couple of ways to look at this. First, you could criticise Nvidia for releasing a marginally better product at a substantially higher price; that happened with the RTX 2080 Ti as well, yet it still sold well. This all stems from the first GPU shortage in 2017, when GTX 1080 Tis were changing hands for well north of $1,000. This set a precedent and sent clear signals that enthusiasts (or desperate gamers) would outbid miners to get their hands on the current best-in-class GPUs. Nvidia stopped being shy about the four-figure threshold for their flagship GPUs. Some people see this as being ripped off, others as just a function of market forces. The bad feeling originates from the fact that Nvidia are exploiting market conditions to elevate the price of this GPU by perhaps $200. Remember the cost of GDDR6X VRAM and how halving the quantity likely saves $100 or more in parts cost alone? That ‘saving’ clearly isn’t being passed on to the customer here.

You can also look at products like the RTX 3080 Ti as luxury goods: they clearly are. But like an expensive watch, car, or handbag, the price isn’t justified in any way by the features of the product. They’re prestige items, as much about proving you can afford ‘the best’ as actually needing the performance. When viewed like this, objective metrics break down: no one cares that the latest limited-edition Porsche is just 0.1 seconds faster to 60 for an additional $50,000. They just want the fastest Porsche, and there will still be a waiting list to buy them.

How you feel about this likely comes down to your own personal assessment of value. It’s absolutely gutting at the moment that products like this exist when there are no affordable options for gamers. If this card existed alongside $450 RTX 3060 Tis and $700 RTX 3080s, it wouldn’t feel like such an egregious situation. It’s the fact that people feel compelled to spend this amount just to get a card that leaves a hint of exploitation in the air. In short, you should only consider the RTX 3080 Ti if money literally isn’t a thing to you, in which case, presumably, the RTX 3090 is also in reach. But that 24GB of VRAM is still wasted on games.

I absolutely love the way this card performs in VR, in the most demanding titles, at high resolutions. I absolutely hate the price and the state of the market. Hopefully, the market will correct in time and we are seeing signs of that already. So unless you absolutely need a card of this calibre right now, my advice would be to wait – prices are only going to come down from here.  

How to Lower GPU Temperatures
https://premiumbuilds.com/tips/how-to-lower-gpu-temperatures/ (Thu, 27 May 2021)

Your GPU is likely one of, if not the, most expensive parts in your computer. High temperatures while gaming or performing high-performance tasks can mean a shorter lifespan and reduced performance, so keeping it under control is a worthy endeavor. High temperatures can even lead to thermal throttling and cause sudden FPS drops in games. Most GPUs can safely operate at high temperatures – some even as high as 90°C – but reducing heat as much as possible is still a good idea.

Common wisdom says that graphics card temperatures are optimal at around 50–60°C under load, and ideally should not exceed 70°C. You can check your graphics card’s user manual for specifics if you are interested. Remember, temperatures will always run higher during more intense tasks or more graphically demanding games.

There are plenty of ways to check how hot your GPU is getting while gaming. Some modern games include live statistics that overlay on top of gameplay so you can easily track everything. Otherwise, you will have to rely on a third-party application like MSI Afterburner or Open Hardware Monitor to see operating temperatures. These often come with the added benefit of additional monitoring statistics, too.
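
If you prefer logging temperatures from a script rather than an on-screen overlay, Nvidia’s `nvidia-smi` command-line tool can report them in CSV form (`nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader`). As a minimal sketch – parsing a captured, hypothetical sample of that output rather than invoking the tool, since your system may not have it – you could do something like:

```python
def parse_gpu_temps(csv_output: str) -> list[int]:
    """Parse per-GPU temperatures (in degrees C) from the output of
    `nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader`."""
    return [int(line.strip()) for line in csv_output.splitlines() if line.strip()]

# Hypothetical captured output from a two-GPU system:
sample = "67\n83\n"

for idx, temp in enumerate(parse_gpu_temps(sample)):
    # Flag anything above the ~70C guideline discussed above
    status = "OK" if temp <= 70 else "running hot"
    print(f"GPU {idx}: {temp}C ({status})")
```

Run on a schedule, a script like this makes it easy to spot thermal creep over weeks of use, something a one-off glance at an overlay won’t show.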

Let’s explore some of the common reasons why your GPU may be facing increased temperatures and how to fix them.

Related: Recommended tools to stress-test your PC
Related: How to maximize your gaming PC’s performance


Why Do GPU Temperatures Rise?

When computer parts draw power, they generate heat. The more intense the program, the more power is drawn, increasing the heat. Computer parts and cases are designed with this in mind, and with proper cooling in place, heat is not an issue. However, when cooling in the GPU or case fails, temperatures rise much more quickly.

Some of the most common reasons why GPU temperatures rise unnecessarily include:

  • Dried thermal paste
  • Dust-filled fans
  • Poor airflow
  • Unclean graphics card

Some of these causes overlap with general heat issues in a computer, while others – like dried-out thermal paste and a dirty graphics card – are more specific to GPUs. In this article, we will focus on GPU-specific fixes. If you want more general heat-management tips for your computer or CPU, including a focus on improving airflow, you can check out this article here.

For a quick overview, however, here are some basic tips:

  • Clean your computer and maximize airflow
  • Keep your computer case closed and efficient
  • Use enough fans in a proper configuration

If improving the general airflow and cooling of your case does not produce your desired results, try the following to reduce GPU temperatures directly.


Tip #1: Clean Your Graphics Card

RTX 2070 Super Founders Edition Ghost S1

A quick clean of your graphics card, especially its fans and heatsink, can be a great way to improve your GPU temperatures. If you are experiencing only mildly high temperatures, there is no need to take apart your graphics card, making this a great first step.

GPU fans are meant to push hot air away from the card, while the heatsink draws heat up from the chip. In modern cards, the fans are often situated directly above the heatsink to make this process more efficient. As time goes on, dust and grime naturally build up and cause hot air to get trapped.

A quick cleaning of your card is easy. Safely remove the card from your computer and disconnect any wires. You may already notice some dust buildup on the back of the card and fans. If it is visible already, cleaning is almost certainly going to help.

With the card removed, it is time to clean away the dust. You can use a number of different methods for this. Cans of compressed air are common, but a clean, soft microfiber cloth is another popular choice. Whatever the method, it is important to use it safely. Do not tilt compressed-air cans upside down, and follow all manufacturer directions. Otherwise, you could accidentally spray liquid onto your card and ruin it.

People opting for a soft paintbrush or cloth should be particularly gentle while cleaning. Too much pressure could break or warp something, resulting in costly repairs. For the plastic parts like the outer shell and fan blades, you can also use baby wipes for more effective cleaning. We do not recommend using them on the actual electronics, however.

A quick clean this way takes only a few minutes and can produce drastic improvements in performance. However, if you test afterwards and do not notice any changes, it is time to move on to the next tip, which requires taking apart the GPU for deeper work.


Taking Apart Your Graphics Card

Taking apart your graphics card is required for many other steps to reduce GPU temperatures, including replacing the thermal paste and deep cleaning the card. To reach the chip and hidden areas, you will have to unscrew the heatsink from the card. Many newer, high-end graphics cards also include a backplate on the card that may need to be removed first. Before continuing, it is worthwhile to check your card’s warranty. Removing the heatsink may void the warranty, depending on the manufacturer.

With that said, most heatsinks can be removed without voiding the warranty so long as the card is not damaged in the process. This is also useful if you plan on replacing the stock cooler with a water-cooled option.

After unscrewing the heatsink, gently remove the cooler. There will be a wire connecting the heatsink to the board, so do not pull hard! The goal is to remove the heatsink from the board enough to disconnect the wire. Most commonly, it is in a bottom corner.

With the heatsink gently removed and the wire disconnected, your card should now be in two pieces. This will also expose the GPU’s chip and old thermal paste, with some on the chip and heatsink at the point of connection. You will also likely notice dust build-up which can be cleaned.


Tip #2: Replacing the Thermal Paste

Arctic MX-4 Thermal Paste

With the chip exposed and the old thermal paste visible, you can now replace it. As heat is transferred through thermal paste and age sets in, thermal paste can dry up and become less effective. While it requires a bit more work to replace, this is one of the most sure-fire ways to reduce GPU temperatures.

To start, wipe the old thermal paste off the chip and heatsink. This is best done with a soft cloth and some rubbing alcohol. If the thermal paste is truly dry, you may need to apply some pressure. Remember to be gentle, however – this is the most delicate part of a graphics card.

Avoid cloth or paper that breaks apart easily or leaves fibers behind, as these can influence heat transfer. A fresh microfiber cloth once again is likely the best option. When it comes to rubbing alcohol, the higher the concentration, the better.

Take the cloth and wipe away as much thermal paste as you can. With most of the larger chunks removed, apply the rubbing alcohol directly to the cloth and begin working on the details. Do not overload the cloth; it is important that no extra liquid hits the rest of the board or heatsink. After just a bit of work, it will look shiny and new – a perfect receptacle for fresh thermal paste.

When you are ready to replace the heatsink and close up the card, apply fresh thermal paste. Only a small dot is necessary – overapplying is a far more common mistake than not using enough, and a pea-sized amount is plenty. The pressure from the heatsink will spread the paste evenly across the chip.


Tip #3: Deep Cleaning the Inside Of The Graphics Card

Once you have removed the heatsink but before putting on fresh thermal paste, you can deep clean the rest of the graphics card. In most respects, this is the same as the quick cleaning done earlier. A microfiber cloth and a can of compressed air are helpful for cleaning. Some Q-tips or a soft, clean paintbrush may also help with hard-to-reach areas.

Clean out everything you can with the GPU taken apart. Thanks to the disconnected heatsink, you can move it around and reach new angles and areas where dust has built up. This is the most useful part, but do not neglect the card itself. For all areas that are not electronic, you can use clean baby wipes for extra cleaning. Otherwise, stick to the air and cloth.

With everything cleaned, you can start the process of putting it all back together. If you used baby wipes, wait for the non-electronic parts to fully dry before putting them back together. This is also the time to apply fresh thermal paste as outlined above in tip #2.

Connect the wire from the heatsink back to the card and carefully screw everything back together. Take a final look over the card to make sure no spots were missed and everything is snug, and you are good to reinstall it!


Summary

There are plenty of ways to improve your GPU temperatures, including making improvements to your general case. While those are important and should be your first step, they may not be enough if problems are long-lasting or severe. Deep cleaning your GPU specifically and replacing its thermal paste are two great options to bring cooling back to respectable levels.

Remember that high temperatures can cause hiccups and slowdowns, reduce the lifespan of the graphics card, and potentially cause other issues throughout the computer. While most cards remain safe at high temperatures – some even above 90°C – it is best to keep parts below 70°C under load. The more you push the computer with intense tasks like gaming and 3D modeling, the higher temperatures will climb.


AMD vs Nvidia: Which GPU is best for you? https://premiumbuilds.com/graphics-cards/amd-vs-nvidia/ https://premiumbuilds.com/graphics-cards/amd-vs-nvidia/#respond Fri, 07 May 2021 15:04:58 +0000 https://premiumbuilds.com/?p=807445 The GPU is arguably the most important component in a gaming PC, and your choices start with this simple question: should you choose AMD or Nvidia? Though Nvidia has historically dominated the graphics scene, today the two vendors are perhaps more equally matched than ever before. Still, there are key differences between AMD and Nvidia… Read More »AMD vs Nvidia: Which GPU is best for you?

The post AMD vs Nvidia: Which GPU is best for you? appeared first on PremiumBuilds.

AMD vs Nvidia

The GPU is arguably the most important component in a gaming PC, and your choices start with this simple question: should you choose AMD or Nvidia? Though Nvidia has historically dominated the graphics scene, today the two vendors are perhaps more equally matched than ever before. Still, there are key differences between AMD and Nvidia GPUs which you should consider before making a decision.

Performance and Pricing

Unlike the prior RX 5000 series, the new RX 6000 series makes a serious attempt to dethrone Nvidia at the very top. Here’s a rough tier list:

GPU Tier List      AMD                      Nvidia
Bragging Rights    RX 6900 XT               RTX 3090
Top End            RX 6800 XT, RX 6800      RTX 3080
Upper High End     RX 6700 XT               RTX 3070, RTX 3060 Ti
Lower High End     RX 5700 XT, RX 5700      RTX 3060

It should be noted that this list was created using benchmarks conducted at 1440p and without ray tracing or DLSS or other related settings. The resolution is relevant because the margin between RX 6000 and RTX 3000 GPUs depends on it. As the resolution increases, Nvidia actually gains ground on AMD. I decided to just focus on 1440p since it’s right in between 1080p and 4K. If you want to min-max framerates, AMD tends to do better at lower resolutions and Nvidia at higher. That being said, there’s no reason why you can’t use a 6800XT at 4K or a 3080 at 1080p, since the difference is only a handful of frames either way.

Unlike previous generations, AMD is not positioning itself as a budget alternative; consequently, the RX 6000 series is the most expensive generation of AMD GPUs in a long while. AMD still represents better bang for buck, but only barely. If value is your main concern, there is no significant dilemma in choosing between Nvidia and AMD.

Nvidia provides better options in the “Lower High End” tier thanks to the introduction of the RTX 3060. The RX 5000 series is certainly fine, but it lacks major features the RX 6000 series introduced. Neither vendor, however, offers anything below $300, which means you have to spend hundreds of dollars for the newest GPUs. If you want to spend less money for less performance, you have to buy an older GPU.

Some other tidbits: AMD tends to offer a little more memory throughout the stack, but it’s not clear whether or not this will actually matter. Nvidia’s power consumption is usually much higher than AMD’s, and given the performance that means efficiency is also lower, sometimes by a wide margin.

Ray Tracing and Upscaling Technology

Ray tracing is the next big thing in gaming, and Nvidia was the first to support it starting in 2018 under the RTX brand. AMD finally introduced ray tracing with the RX 6000 series, but the performance just isn’t as good as Nvidia’s. Both AMD and Nvidia GPUs lose frames when enabling ray tracing, but RX 6000 GPUs lose far more. This isn’t necessarily a critical loss for AMD, however, since ray tracing is supported in very few games and sometimes doesn’t improve visual quality very much. It’s difficult to tell how fast the industry will adopt ray tracing, so I can’t say whether or not AMD’s poorer performance really will matter.

Supersampling and visual-smoothing technologies are another big frontier. Alongside ray tracing support, Nvidia also introduced DLSS (deep learning super sampling), which takes a lower-resolution image and upscales it to look like a higher-resolution one. However, just like ray tracing, DLSS is not present in many games because it requires a high level of support from developers and Nvidia. In the future, however, we should expect more games to utilize DLSS because the popular game engines Unreal and Unity support it natively. It appears that developers will soon be able to support DLSS easily, but we will have to wait and see.

AMD, meanwhile, doesn’t really have its own DLSS-like technology. They have been teasing FidelityFX Super Resolution, but there are basically no details about visual quality or performance. All AMD has confirmed is that it will arrive before the year ends and that it doesn’t use machine learning. Though AMD didn’t need to confirm that second point, since we already knew RX 6000 doesn’t have machine-learning hardware. AMD does have Radeon Image Sharpening (RIS), but it is a far more primitive feature by comparison.

Software Features

Another distinct advantage Nvidia has over AMD is NVENC. I’ll spare you the technical details of what exactly it is; all you need to know is that Nvidia GPUs usually record game footage at a higher quality than AMD GPUs. You aren’t required to use Nvidia’s first-party Shadowplay software to use NVENC, either. Third-party programs like OBS (one of the most popular recording and streaming applications) support the feature.

AMD does, however, have two unique features which Nvidia has never tried to emulate: Radeon Chill and Radeon Boost. Radeon Chill was made back in 2016, when AMD GPUs were very power inefficient. It lowers performance (and by extension, power consumption) when you’re not moving much. If you’re really concerned about power efficiency, it might be a worthwhile feature. Radeon Boost was introduced in 2019 and is more broadly applicable: it dynamically lowers the resolution when you move in order to increase the framerate. Of course, while you’re moving around, you probably won’t notice the resolution decreasing. Sadly, neither feature works in every game. Chill apparently works in “most games” except for Windows Store games, and Radeon Boost only works in a dozen or so titles.

G-SYNC and FreeSync

I should also mention G-SYNC and FreeSync. These are anti-screen-tearing technologies, and back when they were introduced, they were fairly simple first-party standards. Nowadays, there are multiple versions of each. G-SYNC comes in two flavors: the proprietary one using an FPGA module (which increases monitor costs significantly) and one based on Adaptive Sync, a vendor-agnostic anti-screen-tearing technology. FreeSync, however, is easily the worst offender for version sprawl: there is regular FreeSync, FreeSync 2 (seemingly discontinued), FreeSync Premium, and FreeSync Premium Pro. Generally speaking, the different tiers reflect not the quality of the technology but the monitor. They’re like those HDR labels monitors have been shipping with recently. Both technologies do the same thing, more or less.

Finally, the race is also even when it comes to recording game footage. Both vendors offer GPU-based recording software: ReLive for AMD, Shadowplay for Nvidia. It used to be the case that AMD didn’t even have a Shadowplay alternative, but some years ago ReLive finally arrived. Aside from NVENC, Nvidia no longer has a significant advantage over AMD for recording games.

Driver Suite

This section focuses not on driver stability or performance, but on the driver suite’s UI and features. Both Nvidia and AMD, of course, offer first-party software so that you can customize your experience. Many of the features above can only be enabled or customized through these suites.

AMD’s driver suite requires no login of any kind and has a plethora of settings. However, most of the graphics settings don’t appear to be very useful. The only ones I would adjust are tessellation and Enhanced Sync (a software mitigation for screen tearing). There are also some color- and display-related settings I have never seen any reason to use, except for FreeSync and Virtual Super Resolution. On the other hand, there are plenty of genuinely useful settings, like those for ReLive, Radeon Boost, Radeon Chill, and Wattman, a built-in overclocking tool which I personally really like, though not everyone feels the same way. Though AMD’s suite could use a trim, overall it’s quite useful.

When it comes to UI design, AMD’s driver suite looks good and performs well. It has actually been a primary goal of AMD’s since 2015 to deliver modern looking driver suites; you might even think AMD has gone a little overboard. Browsing from menu to menu and from option to option is fast and happens at a high framerate. There are a few themes to choose from and they all look pretty nice.

Nvidia, on the other hand, has gone a very different direction. When you install Nvidia drivers, you get two choices: install just the driver suite or install the driver suite with GeForce Experience. I’ll get to GeForce Experience in a moment, but first, the driver suite. It is atrocious. It really looks like it hasn’t been redesigned since the XP days. When going from one menu to another, or changing a setting, there’s usually a good second or two of lag. How could there be lag when this UI is so barebones?

But perhaps the worst thing about the basic driver suite is that you don’t get all of the features and settings. To use features like Shadowplay and Ansel, you need GeForce Experience, which is more of a media-oriented extension of Nvidia’s driver suite. It requires you to log in with an account, which is pretty annoying, even if it’s only once. But if you don’t want Shadowplay, Ansel, or the other media features, GeForce Experience is almost entirely useless. GeForce Experience also doesn’t have the same customization options as the base UI, because it’s not a replacement.

AMD is the clear winner when it comes to driver usability, though this probably won’t matter to most people, just those who like to tinker around with their GPU.

So, which should you go with?

It’s been about 15 years since the PS3 and the Xbox 360 faced off, and I can’t help but draw parallels between that debate and this one. It’s really hard to choose between AMD and Nvidia because they’re so close. Neither one holds a significant advantage over the other. However, Nvidia has a significant head start on ray tracing and upscaling technology, which will be important in the future. But let’s consider one thing: ray tracing and DLSS are still uncommon features, and we’re already into the second generation of ray tracing GPUs. By the time ray tracing and DLSS are truly industry standard features, we might already have RTX 4000 and RX 7000 GPUs. In a couple of years, perhaps AMD will catch up to Nvidia or even beat them.

Whether you go with AMD or Nvidia, you’re getting a similar experience overall. If you have very particular needs, then you might find that one is really much better than the other. Most people, however, won’t be able to tell the difference.

Sources:

  1. “AMD Radeon RX 6900 XT Review”, Techspot.
  2. “AMD Radeon RX 6900 XT Review – The Biggest Big Navi”, TechPowerUp.
  3. “AMD Radeon RX 6700 XT Review”, Techspot.
  4. “AMD Radeon RX 6700 XT Review”, TechPowerUp.

What is AMD Smart Access Memory – And Is It Worth It? https://premiumbuilds.com/what-is/amd-smart-access-memory/ https://premiumbuilds.com/what-is/amd-smart-access-memory/#respond Wed, 21 Apr 2021 18:24:28 +0000 https://premiumbuilds.com/?p=807438 Despite being diehard PC fans, we believe that the best way to begin this article is with a video from a console title. Specifically, one of the first “boss fight” in the PlayStation 2 classic God of War 2:  This might not look impressive by today’s standards, but it was awe-inspiring at the time of… Read More »What is AMD Smart Access Memory – And Is It Worth It?

The post What is AMD Smart Access Memory – And Is It Worth It? appeared first on PremiumBuilds.

What is AMD Smart Access Memory

Despite being diehard PC fans, we believe that the best way to begin this article is with a video from a console title. Specifically, one of the first “boss fights” in the PlayStation 2 classic God of War 2:

This might not look impressive by today’s standards, but it was awe-inspiring at the time of release. More powerful consoles were available (OK, the Xbox). The almighty PCs already had much faster CPUs, heaps of RAM, and more advanced dedicated GPUs. Still, both were struggling to produce similar combinations of graphical fidelity, full-screen motion of high-poly models, and dynamic lighting. 

However, what’s really impressive is that this 14-year-old game was running on what is now over-20-year-old hardware. Yes, it’s been over two decades since the PlayStation 2 was released. Why are we ranting about consoles and their games? When will we touch on the actual topic at hand? Well, we already did!

One of the primary reasons God of War 2 was possible on the lowly PlayStation 2, whose GPU ran at around 145 MHz and CPU at 299 MHz, was that they both had access to the same RAM. The same is true for many of the consoles that followed. It’s one of the reasons their frozen-in-time hardware manages to keep up with the ever-evolving hardware in our PCs. And that’s precisely what AMD (and “friends”) bring to PCs with Smart Access Memory. 

Have we piqued your interest? Let’s dive deeper and see what this means for our computers and, more importantly, gaming on them. 

What is Smart Access Memory? 

Have you ever heard the famous quote “640 KB ought to be enough for anybody” that’s misattributed to Bill Gates? Well, just like old PCs had to deal with that limitation, it turns out that a similar bottleneck existed in our PCs for years. 

Although it’s been eons since GPUs broke the gigabyte barrier, our CPUs could still only access up to 256 MB of that memory. It would be simplistic to say that out of your GPU’s multiple gigabytes of VRAM, the CPU can only “use” 256 MB. However, in some regards, it’s true.

When playing a game, both the CPU and GPU work to produce what we see on our screens. If they both need to work on the same 1 GB of data, the bottleneck in the communication between them means that the CPU can only “push” 256 MB of data to the GPU and must store the rest in system RAM. When the GPU is done with that chunk, it has to ask the CPU to fetch more data from system RAM. Rinse and repeat.

This bottleneck restricts the amount of data on which our PC’s CPUs and GPUs can, for lack of a better term, “collaborate”. On top of that, the continuous back-and-forth introduces lag and puts some extra load on the CPU.
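
As a toy illustration of that round-trip cost (a deliberately simplified model, not how a real driver schedules transfers): streaming a 1 GB working set through a 256 MB aperture takes four separate CPU-mediated copies, whereas an aperture covering the whole of an 8 GB VRAM pool needs just one.

```python
import math

def transfers_needed(workload_mb: int, aperture_mb: int) -> int:
    """Number of CPU-mediated copies needed to stream a workload
    through a fixed-size memory aperture (toy model only)."""
    return math.ceil(workload_mb / aperture_mb)

# Legacy 256 MB BAR vs. the full 8 GB of VRAM exposed via Resizable BAR
print(transfers_needed(1024, 256))   # 1 GB workload, 256 MB window -> 4 round trips
print(transfers_needed(1024, 8192))  # same workload, full-VRAM aperture -> 1
```

Each of those extra round trips is exactly the “rinse and repeat” lag described above.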

AMD’s Smart Access Memory (SAM) is (part of) the answer to that problem. SAM was initially advertised as a feature of AMD’s Ryzen 5000 CPUs. Soon after, AMD announced it would also bring the feature to Ryzen 3000 CPUs. However, SAM also demands a 500-series motherboard and either an RDNA2 GPU by AMD or an Ampere GPU by Nvidia. Yes, Nvidia. 

Despite being advertised as a Ryzen feature, SAM isn’t an AMD-only solution. Quite the opposite: SAM is AMD’s take on what’s best known as Resizable BAR, short for Resizable Base Address Register. What might sound strange, though, is that Resizable BAR isn’t something new – it has been part of the PCI Express standard since back in 2008.

Why Should You Care? 

Admittedly, we exaggerated a bit. PlayStation 2’s unified memory approach wasn’t the sole reason God of War 2 looked so impressive on it. However, without it, games like God of War 2, Shadow of the Colossus, or Metal Gear Solid 3 would be impossible on Sony’s ancient by today’s standards console. 

A unified memory approach means that both the CPU and the GPU can access the same RAM and work on the same datasets. This immediacy eliminates the push-pull relationship they had in the PC space up to now. 

The result should be increased framerates and maybe even perceptibly reduced stuttering/latency in resource-intensive modern titles. For current titles, increased framerates are the given but not the sole benefit: future games that take advantage of the closer, quicker collaboration between CPU and GPU could also show more impressive graphics.

Yes, we know it sounds too vague, too general. That’s because the only examples we’ve had up to now of how such a feature could be best utilized was from the world of consoles. It’s a new approach for PCs, and how (and if) each dev will take advantage of it is up to them. 

Does It Work? 

Although we presented it as such, we admit we lied: SAM/Resizable BAR isn’t an equivalent to unified memory. It’s not as if system RAM and the GPU’s VRAM suddenly become one pool from which developers can choose how much memory to allocate to either, depending on the workload. It’s also different from SoCs and embedded GPUs, where it feels as if the CPU and GPU are fighting over the same piece of memory pie.

SAM/Resizable BAR allows the CPU direct access to the GPU’s VRAM through a virtual tunnel created between them. Its sole purpose is to minimize the barriers between them, not to combine their different memory types into one pool. Its goal is to enable more efficient communication between them, leading to increased performance. And, from what we’ve seen so far, it delivers.

People all around the Internet, from trusted outlets like Techspot and PC Gamer to users on Reddit, claim performance gains that in some cases exceed 10%. Not bad for a feature that, despite AMD’s original advertising, will soon be an industry standard, available to everyone.

Still, you might have noticed we said “in some cases”. That’s because the 10% improvement is not a given. Just as SAM/Resizable BAR leads to improved performance in many cases, in many others it makes no visible difference. Even worse, some users report performance regressions in titles like Battlefield V (on DirectX 11 and Ultra quality) when enabling SAM at lower-than-4K resolutions.

What About Nvidia/Intel Fans? 

AMD was in the privileged position of controlling all aspects of the Resizable BAR equation for its hardware: CPUs, GPUs, and motherboards. So, although Resizable BAR “was a thing” for over a decade, it lay dormant, unused, waiting for the stars to align.

With AMD letting the cat out of the proverbial bag, everyone else jumped on board. They might not have adopted AMD’s new name, but the feature is the same, and all hardware that supports it is compatible.

You can take advantage of Smart Access Memory on all-AMD systems with modern motherboards, Ryzen CPUs, and Radeon GPUs. However, you can also use Resizable BAR on an Intel-based system with either an AMD or Nvidia GPU. The results are similar for both AMD-based and “hybrid” setups: many games see a small but sometimes perceptible improvement in performance. Even more titles don’t, and the difference is negligible. And there are also the admittedly more fringe cases of games where the performance drops (although not significantly) after enabling the feature.

As for why Smart Access Memory is presented as a golden pill that works as a universal solution, the reason has almost nothing to do with the tech underneath. Instead, it’s defined by a single word: marketing.

Is It Worth It? 

There’s no reason to ponder over the usefulness of Smart Access Memory. The feature has already been considered A Good Idea, accepted, implemented in the PCI Express 3.0 standard, and… Well… 

That was where its story ended – the point in time where this Good Idea went into hibernation. Like a digital fairy tale, over a decade later AMD found itself in the perfect position to wake the sleeping beast.

Do note, though, that SAM/Resizable BAR isn’t a switch that you flip and everything goes faster. The increase in performance isn’t a constant. There are cases where Smart Access Memory/Resizable Bar doesn’t seem to affect a title drastically. Cases where there’s only an imperceptible single-digit framerate difference. 

We’re sure that in the future, the situation will change. More modern titles will take advantage of the faster collaboration between CPU and GPU to produce stunning visuals. For current titles, though, it’s more of a nice-to-have bonus that gives a slight boost to framerates – not an earth-shattering, must-have new feature for which you should rush to upgrade your rig.

Can’t Find a Graphics Card In Stock? Here’s Why https://premiumbuilds.com/features/why-are-graphics-cards-out-of-stock/ https://premiumbuilds.com/features/why-are-graphics-cards-out-of-stock/#comments Tue, 20 Apr 2021 12:55:35 +0000 https://premiumbuilds.com/?p=807393 If you’ve been eyeing to build your own gaming or workstation PC, you may have asked yourself where in the world all of the graphics cards have gone. If you look up GPUs in Amazon or Newegg, you’ll be greeted with hundreds of listings that are almost all sold out. The few that are in… Read More »Can’t Find a Graphics Card In Stock? Here’s Why

The post Can’t Find a Graphics Card In Stock? Here’s Why appeared first on PremiumBuilds.

why is there a graphics card shortage

If you’ve been looking to build your own gaming or workstation PC, you may have asked yourself where in the world all of the graphics cards have gone. If you look up GPUs on Amazon or Newegg, you’ll be greeted with hundreds of listings that are almost all sold out. The few that are in stock have had their prices hiked up by hundreds of dollars. The reason is none other than the fact that the industry is currently facing the largest GPU shortage in history.

In a completely unprecedented sequence of events, the world – which underwent a harrowing series of trials and tribulations last year – is facing extreme supply issues across many industries. When it comes to graphics cards (or the lack thereof), the situation has only gotten worse since. Even low-end and midrange GPUs like the GTX 1650 have seen inflated prices of up to 300 to 400 percent of their MSRP.

Industry analysts and insiders have come forward and said that the graphics card shortage boils down to consumers’ demand exceeding the current supply. Both Nvidia and AMD have stated that the current demand for GPUs has exceeded their expectations, mostly due to the generational leaps in performance that both Ampere and Navi cards brought along with their releases.

However, demand exceeding supply is not unusual for the PC hardware industry. It’s the fact that manufacturers are unable to produce enough new graphics cards to meet that demand that has everybody in a rut. Thus, the question on everyone’s mind is: why are supplies so low?

There’s more than one answer to this question. It’s mostly down to the triple threat of scalpers, cryptocurrency miners, and a global silicon shortage that has squeezed the entire electronics industry’s supply of semiconductors.


1. Scalpers

Whenever there’s a highly anticipated product launch, you can expect to find scalpers prowling at the front of the line. ‘Scalper’ is the term for someone who buys freshly launched products and resells them on eBay or StockX at highly inflated prices to make a profit. One lone-wolf scalper snagging a few RTX 3080s to sell at 200 percent of MSRP is usually not much of a threat. However, the last few months of 2020 brought scalpers out of the woodwork in droves, many working in groups to buy thousands of products in bulk and sell them at ungodly markups.

Scalpers were especially notorious during the launch of the long-awaited PS5 and Xbox Series X/S.

In an article written by PCMag in January, it was reported that scalpers had sold nearly fifty thousand RTX 3000 series graphics cards on eBay and StockX since launch. In a report submitted by Michael Driscoll, total sales amounted to $61.5 million, of which scalpers made off with an estimated $15.2 million in profit. The fear of missing out on new product launches runs rampant among consumers, causing them to spend copious amounts of money buying these products from scalpers.
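Driscoll’s totals make for some quick per-card arithmetic; a rough sketch, using the reported figures of roughly 50,000 cards, $61.5 million in sales, and $15.2 million in profit:

```python
# Per-card averages implied by the reported scalper figures.
# The 50,000-card count is approximate, so treat these as ballpark numbers.
cards_sold = 50_000
total_sales_usd = 61_500_000
total_profit_usd = 15_200_000

avg_sale_price = total_sales_usd / cards_sold   # average resale price per card
avg_profit = total_profit_usd / cards_sold      # average scalper profit per card
avg_cost_basis = avg_sale_price - avg_profit    # what scalpers paid on average

print(f"${avg_sale_price:,.0f} sale, ${avg_profit:,.0f} profit, ${avg_cost_basis:,.0f} outlay")
```

That works out to roughly $1,230 per card sold, with about $304 of profit on an average outlay of around $926.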

To add to the problem, the launch of the RTX 3000 and AMD RX 6000 series of graphics cards saw markups over MSRP from manufacturers and AIB partners as well. In late 2020, the Trump administration, as part of its trade war with China, imposed tariffs of up to 25 percent on motherboards and GPUs shipped from mainland China. Notable board partners such as EVGA and Zotac were shown to have increased the MSRP of their graphics cards by $70-$100. AMD CEO Dr. Lisa Su claimed that the company is trying its best to keep the prices of its cards down. In a quote transcribed by AnandTech, she said:

“We knew about the expiration of some tariff policies, and in advance worked towards a more flexible supply chain as it relates to AMD. We are committed to keeping GPU pricing as close to our suggested retail pricing as much as possible, because it’s the only way to be fair to the users. Normally when we have GPU launches, our own branded cards are available initially but then fade away for our partners to pick up. This time around we’re not phasing out our RX 6000 series, enabling us to sell direct to customers as low as possible. We’re encouraging partners to do the same. Not only tariffs, but the COVID environment has increased shipping and freight costs, which are hard to avoid. As we get into a more normal environment, this should improve. This also matters for our planned graphics updates through the first half of the year, as we have a lot of product coming to market.”


2. Crypto Miners


On the other side of the GPU shortage coin are the cryptocurrency miners. 2020 saw a huge boom in crypto prices, notably Bitcoin and Ethereum, which prompted miners to dust off their mining rigs and start making money. The situation was similar back in 2017, when surging Bitcoin prices and miners caused a mass graphics card shortage. Towards the end of last year, Bitcoin exploded upwards of $35,000, a record high. Ethereum likewise jumped to $400 during 2020, and even managed to spike over $1,000.

The previous crypto rush ended in 2018 when the market inevitably crashed. Demand died down, the majority of miners flooded the second-hand market with used mining cards to recoup operational costs, and the GPU market normalized. Consumers are hoping for a similar return to normalcy after the current mining boom, but given the state of things right now, we would be hard pressed to share that optimism.

While many gamers and other consumers were ticked off that crypto miners seemed to have dibs on the new GPUs, Nvidia thinks otherwise. At the Raymond James Institutional Investors Broker Conference call held on March 1st, Nvidia CFO Colette Kress said that she does not believe the current graphics card shortage is caused by miners. She went on to say that the supply constraints on Ampere, along with record demand for the cards, are the main driving factors. 2020 brought along the “work from home” lifestyle, and new gamers and other users suddenly needed new PCs. Kress also said that gaming’s popularity had skyrocketed by Q1 2021, and thus the supply issues would have persisted even without the impact of cryptocurrency miners.

In an attempt to reduce crypto miners’ demand for consumer GPUs, Nvidia unveiled a new line-up of CMP HX mining graphics cards. These have no display outputs and serve solely to mine cryptocurrency. Moreover, Nvidia took steps to ensure that its midrange RTX 3060 was not enticing to miners: according to Nvidia, the RTX 3060 contains a BIOS- and hardware-level lock that halves the hashrate of the graphics card so that it won’t be efficient at mining crypto. An Nvidia spokesperson called the BIOS “unhackable” and stated that they were confident miners would look elsewhere for a mining-focused card. That confidence was short-lived, as a leaked driver update re-enabled the RTX 3060’s full mining capability almost as soon as the limiter was implemented. Reports still suggest that Nvidia’s upcoming RTX 3080 Ti will also ship with the mining limiter.
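To see why a halved hashrate blunts a card’s appeal to miners, consider a back-of-the-envelope profitability sketch. Every number here (hashrate, power draw, mining yield, electricity price) is an illustrative assumption, not an official or measured figure:

```python
# Daily mining profit = revenue (scales with hashrate) - electricity (scales with power).
# All inputs below are made-up illustrative values.
def daily_profit(hashrate_mh, power_w, usd_per_mh_per_day=0.06, usd_per_kwh=0.10):
    revenue = hashrate_mh * usd_per_mh_per_day
    electricity = (power_w / 1000) * 24 * usd_per_kwh
    return revenue - electricity

full = daily_profit(hashrate_mh=40, power_w=170)     # hypothetical unrestricted card
limited = daily_profit(hashrate_mh=20, power_w=170)  # limiter halves hashrate, not power
print(round(full, 3), round(limited, 3))
```

Revenue halves but the electricity bill doesn’t, so the profit margin falls by well over half, which is exactly what makes a limited card a poor per-watt choice for a mining farm.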

There were concerns raised by enthusiasts on the effect the CMP cards would have on the resale market of GPUs. One of the main contributing factors that allowed the market to return to normalcy after the 2017 crypto boom was that miners ended up selling hundreds of GPUs that ended up in the hands of gamers. With the CMP HX line-up of cards only being able to mine, without being able to game on them, the second-hand market would dry up, leaving consumers with no choice but to wait until Nvidia themselves release enough graphics cards to sustain demand.

AMD, on the other hand, went on record to say that there will be no such limits on any of its RX 6000 GPUs.


3. The Great Global Semiconductor Shortage


Even if the scalpers and crypto miners disappear back where they came from, a final hindrance to the world’s GPU supply persists: the global silicon shortage. The availability of semiconductors worsened steadily through the latter half of 2020. In the initial days of the pandemic, supply constraints were seen as only a temporary setback as factories and supply chains closed to comply with COVID regulations.

But when the cogs of the silicon industry began turning again, the world was in a completely different state from the one they had left. The sudden changes in lifestyle brought on by the global pandemic meant that people relied on electronics and computers to a far greater degree than before. The resulting surge in demand for semiconductors took the industry by surprise. Even tech behemoths like Apple were not prepared for the shortages, and had to postpone the launch of the iPhone 12 by two whole months.

The global silicon shortage also dealt a giant blow to the automotive industry. Ford reported an estimated $2.5 billion loss in profits. The inability to obtain enough semiconductors meant Nissan had to hold off on opening new plants, and General Motors also reported a hit to profits upwards of $2 billion.

This sudden shortage in silicon occurred during a transition phase and launch of a new generation of game consoles. Sony and Microsoft came forward and revealed that their profit targets will not be met due to the lack of availability of the PS5 and the Xbox Series X and Series S.

Even Samsung, one of the largest semiconductor manufacturing powerhouses in the world, is considering delaying the launch of its next flagship smartphone. This is quite possibly the biggest indicator of how dire the situation currently is, not to mention that Samsung is in charge of producing the semiconductors for Nvidia’s new Ampere line-up of graphics cards.

The numbers back up these claims too. According to the Semiconductor Industry Association (SIA), semiconductor sales in January 2021 totalled $40 billion, 13.2 percent higher than a year earlier.

Industry analysts observing the developing situation have noted that demand for silicon will continue to exceed supply at least until the end of summer. If things take a turn for the worse, they estimate the shortage could last well into 2022.

In an attempt to mitigate the lack of silicon and boost production, the SIA has urged the Biden administration to pave the way for new semiconductor plants within the US. Following this, President Biden announced that $37 billion will be allocated towards that goal. Once these plans come to fruition, the current semiconductor shortage is projected to improve and even normalize. But these are long-term goals that will take at best two to three years to fully implement. Can the electronics industry hold on until then? Or will the situation get worse before it can hope to get better?


4. The State of the World As it is Now

So, what’s the bottom line in regards to the availability of graphics cards (or lack thereof)?

In January, Nvidia CFO Colette Kress announced that the current shortage would last until the end of the first quarter of 2021, which on Nvidia’s fiscal calendar ends in April. Evidently, there is no end in sight to the lack of GPU supply just yet. AMD also announced in January that its RX 6000 series cards would be readily available through its own website, a statement almost instantly invalidated as each card went out of stock seconds after being made available for purchase. AMD in particular has been criticized for making empty promises about the availability of its new products, with fans accusing it of “paper launches” without any actual stock behind them.

In an updated statement, Kress revealed that the shortage of RTX 3000 cards will persist throughout the rest of the year. She went on to say that unprecedented demand for the Ampere line-up meant it sold out twice as fast as Turing did over the same timeframe. However, Kress did say that Nvidia expects availability of RTX 3000 cards to gradually rise throughout the year, though supply still would not meet demand. If Nvidia’s projections hold true, you might see Ampere cards in stock more often than before, but we won’t be expecting the situation to return to normal any time soon.

As it stands now, you would be extremely fortunate to snag a graphics card at MSRP. If you’re in the middle of building your PC and every component minus the GPU is already in your hands, you’re in for some tough luck. And for those of you who want to upgrade from an old GTX 1080 to an RTX 3080, consider yourselves very lucky to even have a functional GPU in hand.

However, if you’re in dire need of a gaming or workstation PC at the moment, you will be better off going the prebuilt route (yes, we said it). Check out some of our prebuilt PC guides below to find yourself a computer during these stormy times.


AMD RX 6700 XT vs Nvidia RTX 3060 Ti: Which is Best? https://premiumbuilds.com/comparisons/amd-rx-6700-xt-vs-nvidia-rtx-3060-ti/ Fri, 19 Mar 2021 11:48:31 +0000


It’s tough times for PC builders. We’ve been told new GPUs exist, we’ve even seen some of them, but if you want to buy one… you can’t unless you’re dripping in either luck or money. With very high demand, supply, and distribution issues, and another round of crypto-mining madness, availability is as low as prices are high.

Into the midst of this maelstrom, AMD has announced another new GPU: the Radeon RX 6700 XT, launching on 18th March. This RDNA2-based card slots in below the existing RX 6800 XT and RX 6800 and looks set to challenge the RTX 3060 Ti as a strong 1440p GPU at a $479 MSRP. In this article, we’ll look at the specifications and consider the relative strengths and weaknesses of the RX 6700 XT against the competition, including the RTX 3060 Ti.


Specification Comparison

| Spec | RX 6700 XT | RTX 3060 Ti | RX 6800 | RX 6800 XT |
|---|---|---|---|---|
| Price (MSRP) | $479 | $399 | $579 | $649 |
| Fab Process | TSMC 7nm | Samsung 8nm | TSMC 7nm | TSMC 7nm |
| Architecture | RDNA2 / NAVI 22 | Ampere / GA104 | RDNA2 / NAVI 21 | RDNA2 / NAVI 21 |
| Clock Speeds (Base/Boost) | 2321/2424MHz (unconfirmed) | 1410/1665MHz | 1815/2105MHz | 2015/2250MHz |
| Shader Units | 2560 | 4864 | 3840 | 4608 |
| Compute Units (CUs/SMs) | 40 | 38 | 60 | 72 |
| RT Cores | 40 | 38 | 60 | 72 |
| Infinity Cache | 96MB | N/A | 128MB | 128MB |
| Memory Capacity | 12GB GDDR6 | 8GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 |
| Memory Bus / Bandwidth | 192-bit / 384 GB/s | 256-bit / 448 GB/s | 256-bit / 512 GB/s | 256-bit / 512 GB/s |
| Power (TDP) | 230W | 200W | 250W | 300W |
| Pixel Rate | 165.2 GPixels/s | 133.2 GPixels/s | 202.1 GPixels/s | 288.0 GPixels/s |

Performance: RX 6700 XT (by a small margin)


The RX 6700 XT is the first GPU to use the ‘NAVI 22’ core, still based on the new RDNA2 architecture. This core has 2560 shader units grouped into 40 compute units, and 40 ray tracing cores as opposed to the 60 or 72 of the RX 6800 and RX 6800 XT respectively. However, the core itself runs significantly faster, with boost clocks approaching 2.5GHz, which is likely to make up for some of the performance deficit attributable to the lower unit count. It also makes use of AMD’s ‘Infinity Cache’: 96MB of on-chip memory for the most frequently accessed data, which boosts performance by reducing memory latency, although it is cut down from the 128MB on the more powerful RDNA2 cards.

The RX 6700 XT supports hardware ray tracing, although RTX-exclusive titles like Cyberpunk 2077 don’t enable ray tracing on AMD GPUs; all titles using DirectX 12 Ultimate will permit hardware ray tracing. It’s not possible to directly compare clock speeds or compute units between AMD and Nvidia, but looking at relative performance, the RX 6700 XT performs equivalently to the RTX 3060 Ti, averaging around 2% faster across a range of titles. That makes it a very capable 1440p card and a star performer at 1080p, but like the RTX 3060 Ti, it’s an expensive option for 1080p gaming.


Memory: 12GB GDDR6 on a 192-bit Bus vs 8GB GDDR6 on a 256-bit Bus

Whilst the higher-performance RDNA2 cards get a massive 16GB of VRAM, the RX 6700 XT gets a still-large 12GB frame buffer. It uses GDDR6 RAM running at 16Gbps, accessed via a narrower 192-bit bus, meaning memory access performance will not be at the same level as the higher-tier cards. The RTX 3060 Ti meanwhile makes do with 8GB of GDDR6, but accesses it via a wider 256-bit bus. 12GB is ample VRAM for a card with the anticipated performance of the RX 6700 XT, meaning there will be no constraints on texture size for the foreseeable future other than the GPU’s ability to manipulate them. This VRAM size is a function of the 192-bit bus: it allows access to six memory chips, which are available in either 1GB or 2GB sizes. Since 6GB is no longer adequate for a card of this performance, 12GB was the only option for AMD to make a competitive product. Overall, the memory of the RTX 3060 Ti is faster, helping performance, whilst the RX 6700 XT has the higher capacity, which may extend how long the card remains relevant. We’ll call this a tie.
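The bandwidth figures quoted here fall straight out of bus width and data rate. A quick sketch, assuming 16Gbps GDDR6 on the AMD cards and 14Gbps on the RTX 3060 Ti:

```python
# Peak bandwidth (GB/s) = (bus width in bits / 8 bits per byte) * data rate in Gbps.
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(192, 16))  # RX 6700 XT: 384.0 GB/s
print(peak_bandwidth_gbs(256, 14))  # RTX 3060 Ti: 448.0 GB/s
print(peak_bandwidth_gbs(256, 16))  # RX 6800 / 6800 XT: 512.0 GB/s

# Capacity follows from the bus as well: 192 bits / 32 bits per chip = 6 chips,
# so with 2GB chips the RX 6700 XT lands on 6 * 2 = 12GB.
chips = 192 // 32
print(chips * 2)  # 12 (GB)
```

The same arithmetic explains why AMD’s only realistic options on a 192-bit bus were 6GB or 12GB.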


Features – AMD bring all the toys to the party

AMD GPUs come with their own suite of features to boost performance and get the most out of the card. RDNA2 cards are the first generation from AMD to feature hardware ray tracing cores, enabling shadows, lighting, and reflections rendered in real time with actual ray paths, not approximated ones. However, they’re not as powerful as Nvidia’s second-generation cores, with performance on a par with the first-gen RT cores found in Nvidia’s 20-series GPUs. Nonetheless, they allow advanced rendering effects in ‘DirectX 12 Ultimate’ titles. There’s also ‘Rage Mode’, an automatic overclock to wring every last ounce of performance out of the card. Finally, AMD have ‘SAM’, or Smart Access Memory, which gives the CPU direct access to the GPU’s VRAM, enhancing performance in some games by around 5-10%. Be aware that it can also hurt performance in others; it’s specific to the way the game or engine is coded, so only use it on compatible titles. Nvidia’s Resizable BAR (ReBAR) performs the same function but is only enabled on games where it brings a performance benefit.


Power and Efficiency: Evenly matched

The RX 6700 XT has two 8-pin power connectors and draws 230W under load, or 250W when overclocked. The RTX 3060 Ti has a TDP of 200W and draws around 220W in normal usage or when overclocked; its power limits are carefully constrained to prevent it encroaching on the RTX 3070’s performance. In terms of efficiency, the power levels are as closely matched as performance, with a few watts more equating to slightly higher performance for the RX 6700 XT. In short, power draw and efficiency are equivalent between these two GPUs and not a reason to choose one over the other. Any reasonable-quality 600W+ power supply will handle either GPU, and you won’t need to go overboard on case cooling to keep system temperatures at acceptable levels. The modest power draw also bodes well for manufacturers designing quiet-running versions of the RX 6700 XT.
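The 600W+ recommendation is easy to sanity-check with rough component budgets. The CPU and rest-of-system figures below are assumptions for a typical gaming build, not measurements:

```python
# Estimate total system draw and how much of a given PSU it consumes.
def system_load(gpu_w, cpu_w=125, rest_w=75):
    return gpu_w + cpu_w + rest_w  # GPU + CPU + board/fans/drives, in watts

psu_w = 600
for name, gpu_w in [("RX 6700 XT", 230), ("RTX 3060 Ti", 200)]:
    load = system_load(gpu_w)
    print(f"{name}: ~{load}W, {load / psu_w:.0%} of a {psu_w}W PSU")
```

Either card leaves roughly 30% headroom on a 600W unit, comfortably inside the load band where most power supplies run quietly and efficiently.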


Supply and availability: Likely to be very hard to buy…

There’s still a large question mark over supply and availability: AMD share the TSMC 7nm production lines used for this GPU with the RX 6800 and RX 6800 XT, the entire Zen 3 CPU line-up, and the APUs that go into current-generation consoles. All of this points to supplies being highly limited. After the initial batch is sold, we’d expect the supply situation to regress to that of any other GPU right now: sadly, almost impossible to find at MSRP. This is also the situation with the RTX 3060 Ti.


Conclusion: is the RX 6700 XT better than the RTX 3060 Ti? 


Given the factors discussed above, the RX 6700 XT looks like a viable alternative to the RTX 3060 Ti. It performs as well as or better than it in almost all titles, supports hardware ray tracing, and has a large VRAM capacity, meaning it will stand the test of time if you intend to keep it for the long haul. Given the RTX 3060 Ti’s slight edge in RTX- and DLSS-enabled games at this time, we’d rate these two GPUs as broadly equivalent in terms of the value they offer.

There’s also the interesting discovery that AMD’s drivers impose less CPU overhead at high frame rates and with lower-performance CPUs: the RX 6700 XT may be the perfect choice if you’re upgrading the GPU in a system a generation or two old.

The caveats around the RX 6700 XT are more general: it’s relatively expensive, that price is likely to rocket on release, and supply is bound to be scarce. Ultimately you may not have a free choice between the two cards, in which case either will perform excellently in a 1440p or high-FPS 1080p gaming rig.


5 Best RX 6800 XT Aftermarket Cards for 2021 https://premiumbuilds.com/guides/best-rx-6800-xt-aftermarket-cards/ Thu, 31 Dec 2020 12:46:35 +0000


AMD has risen to the challenge posed by Nvidia’s RTX 3080 with the Radeon RX 6800 XT. This powerful GPU uses the latest RDNA2 architecture to bring true 4K performance to the table and challenge Nvidia’s top gaming graphics card at the $700-$800 price point. With a 7nm core paired with 16GB of VRAM, this card is well placed to keep you enjoying top performance for years to come. AMD has added a number of neat features to boost performance, such as ‘Infinity Cache’ to speed memory access for frequently used data, and ‘Smart Access Memory’ for higher performance on compatible hardware.

All RX 6800 XTs share the same core specifications: 16GB of VRAM rated at 16Gbps, and 4608 stream processors. There’s also hardware ray tracing support with 72 ray accelerator cores, although performance is still a little way behind Nvidia’s Ampere GPUs. The RX 6800 XT draws 300W, and a 750W power supply is recommended. The cards are also physically large, with three-fan, three-slot designs prominent, so make sure you’ve got enough power supply and case space to fit them. They offer an excellent alternative to the Ampere GPUs, with top-tier performance at 1440p and 4K resolutions, and you can be sure they will offer a premium gaming experience for years to come.

Let’s look at some of the specific models and make some recommendations of the best RX 6800 XT aftermarket cards based on your priorities. 


Best RX 6800 XT Cards – Our Recommendations

Best Value RX 6800 XT Card

Sapphire RX 6800 XT Pulse

Whilst it’s not the cheapest RX 6800 XT on offer, we consider the Sapphire Pulse the best-value RX 6800 XT aftermarket card, thanks to a number of features that mark it out from the competition. It has a robust three-fan cooler design with separate VRM and memory heatsinks, keeping temperatures low. A dual-BIOS switch lets you flick between a quiet, low-fan-noise mode and a high-performance overclocking mode. Better still, you can flash each BIOS individually, allowing you to experiment with upgraded BIOS revisions without risking the operation of the card; Sapphire provides support and software for you to do this. It also has quick-change fans that swap out with a single screw and a special connector, meaning you can easily clean and service the card yourself and replace a fan should one fail. It’s no slouch in the performance department either, with a boost clock of 2310MHz, on a par with some of the upper-tier cards. Sapphire has long been a primary AMD board partner, and its customer service and support are excellent. That’s an important consideration, and part of the reason we feel the Sapphire Pulse offers the best value amongst the 6800 XTs.


Quietest RX 6800 XT Card

Asus RX 6800 XT TUF Gaming

Asus has redesigned its TUF Gaming cards for high-end GPUs, and the same cooler design appears on both the RX 6800 XT and the RTX 3080, where it has to cope with an additional 40W of heat. The three-fan design and thick fin stack have been praised for whisper-quiet operation even under load, whilst separate cold plates and heatsinks for the VRAM, VRMs, and the GPU core itself ensure everything stays as cool as can be. A thick expanse of fins with seven heat pipes is cooled by three fans of a special ‘linked blade’ design, with the centre fan contra-rotating to reduce turbulence and thus noise. These move air more efficiently and so can turn slower, making less noise; under idle conditions, they stop completely. There’s also a metal backplate and extensive reinforcement for the PCB to prevent GPU sag, though a support bracket may be advisable with all of these cards as they are heavy. The boost clock remains high at up to 2340MHz, so performance isn’t sacrificed. We recommend the TUF Gaming RX 6800 XT as one of the quietest models you can buy.


Best Performance RX 6800 XT Card

Sapphire RX 6800 XT Nitro Plus

If you’re looking for the best performance amongst 6800 XTs, the Sapphire RX 6800 XT Nitro+ is worthy of strong consideration. This GPU boasts one of the highest quoted boost clocks at 2360MHz, showing Sapphire’s confidence in the quality of the GPU core and the supporting circuitry it adds. Sapphire has completely redesigned a number of cooler features for the Radeon 6000 series, including the fans, heatsink fins, and thermal pads. Design features like an all-metal backplate, a custom cooler with separate VRAM and VRM cooling components, and a large three-fan, three-slot-thick primary cooling stack ensure the card runs cool even when pushed to the limit. Dual BIOS lets you switch between quiet and performance modes, and also lets you flash a new BIOS for increased performance without the risk of rendering the GPU unusable. Sapphire’s ‘Trixx’ software assists with overclocking and tweaking the GPU for best performance, or you can use AMD’s own ‘Adrenalin’ software if you prefer. Overall, the Sapphire Nitro+ is a top-tier version of the card without excessive expense, and will let you extract the maximum performance from the RDNA2 platform.


Best Compact RX 6800 XT for SFF Builds

AMD RX 6800 XT

‘Compact’ is of course a relative term when no manufacturer offers a card with fewer than three fans, or even in a true two-slot form factor. However, the reference design, as made by Asus, Gigabyte, ASRock, and Sapphire, is the most compact currently available at 267mm in length. There are still three cooling fans and a shroud that protrudes past the two-slot backplate. The PCB follows the reference specification, meaning you will have a choice of custom water blocks to fit these GPUs if you wish. Beyond that, reference specification means you’re guaranteed the full RX 6800 XT experience as AMD intended it: 300W power consumption, up to 2250MHz boost clock, 16GB of GDDR6 VRAM, and all the additional RDNA2 features such as ‘Rage Mode’ and hardware ray tracing. The reference cards also have a USB-C port at the rear, two DisplayPorts, and one HDMI port, whilst most partner cards do away with the USB-C port. They’re also the cheapest models, so if you can find one at RRP they represent fantastic value.


Best Enthusiast RX 6800 XT Card

XFX Radeon RX 6800 XT Speedster Merc 319

XFX is another long-time board partner specialising in AMD GPUs. The Speedster Merc 319 is its high-end 6800 XT offering, and goes above and beyond the base specification in a number of areas. This GPU has a 14+2-phase VRM, exceeding AMD’s 12+2-phase specification, and uses high-quality components for power delivery capacity equivalent to the PowerColor Red Devil, the most extreme version of the RX 6800 XT. A number of other features, like additional power-filtering capacitors and advanced cooling solutions, exceed the basic specification, which means higher stability and lower temperatures when overclocked. The comprehensive cooler has seven heat pipes and a ‘through PCB’ cooling design to extract heat as efficiently as possible, along with an aluminium backplate to actively shed heat. The Speedster Merc also has the usual high-end features of an AMD card, such as a dual-BIOS switch to easily enable ‘Rage Mode’, and a zero-fan mode for silent operation with the fans only spinning when the card is under load. It also ships with a three-year warranty for your peace of mind. This card gives enthusiasts a solid basis to explore the potential of an RX 6800 XT, with confidence that the underlying hardware is well equipped to support overclocking.



RTX 3060 Ti vs 3070 vs 3080: Benchmark Comparison (Real World Tests) https://premiumbuilds.com/benchmarks/rtx-3060-ti-vs-3070-vs-3080-benchmark-comparison/ Mon, 21 Dec 2020 22:01:22 +0000


We’ve obtained examples of the RTX 3060 Ti, the RTX 3070, and the RTX 3080 and have spent the last two weeks running them through a suite of benchmark tests. In this article, we’ll present our results to you and conclude with some remarks as to what systems and usage cases these cards are best suited to. 

We’ve tested first-person shooters and AAA titles at 1080p, 1440p, 1440p ultrawide, and 4K. Our testing methodology differs from some other sites’: we’ve chosen settings that show these games in their best light, and that’s not always ‘ultra’ across the board. Most games here were tested on high or a mix of high and ultra settings, letting the GPUs shine and giving you a better understanding of how you can expect them to perform in the real world. The settings remain consistent both within each test and across the resolutions.

From the results here, you will be able to tweak settings to obtain higher quality or higher framerates, whichever you prefer, but you can expect performance on a par with our numbers. Where we identified a clear CPU bottleneck, we’ve mentioned it (hi, Warzone!).
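For readers reproducing these numbers, average FPS and ‘1% lows’ are conventionally derived from captured frame times. A minimal sketch, with an invented frame-time sample:

```python
# Average FPS and 1% low computed from frame times in milliseconds.
def fps_stats(frame_times_ms):
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    slowest = sorted(frame_times_ms, reverse=True)  # worst frames first
    n = max(1, len(slowest) // 100)                 # the slowest 1% of frames
    low_1pct = 1000 * n / sum(slowest[:n])
    return avg_fps, low_1pct

# 99 smooth ~6.9ms frames plus a single 20ms stutter (made-up data).
avg, low = fps_stats([6.9] * 99 + [20.0])
print(f"avg {avg:.0f} FPS, 1% low {low:.0f} FPS")
```

A single stutter barely moves the average but dominates the 1% low, which is why both figures matter when comparing cards.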

The test system is our Ryzen test bench with a Ryzen 7 5800X, 16GB of RAM running at 3600MHz Cl16, and a B550 motherboard.

So, let’s dig into the numbers. 


RTX 3060 Ti vs 3070 vs 3080: Synthetic Tests

3DMark

Firstly, a quick look at some synthetic benchmarks helps us verify our cards are performing correctly and get an idea of where they stack up overall. We’re just looking at the GPU scores here, and we’ve included some popular GPUs of the last generation so you can see how they compare.

RTX 3060 Ti vs RTX 3070 vs RTX 3080 Benchmarks 3DMark

Fire Strike uses DX11, which is an older API now, and shows how closely matched the GTX 1080 Ti and RX 5700 XT are, with the RTX 3060 Ti joining them in this case.

Time Spy uses the DX12 API and is more representative of current games. Here the RTX 3060 Ti fares better, but it’s clear the RTX 3080 still has a big lead.

Finally, ‘Port Royal’ helps us assess the relative ray tracing capabilities of the RTX-enabled GPUs. This test uses DX12’s ray tracing commands to render a complex scene full of shadows, reflections, and lighting sources over and above the traditional rasterised rendering of the core 3D scene. Despite using last-generation RT cores, the RTX 2080 Ti acquits itself well here, and there’s a fair gap between the 3060 Ti, RTX 3070, and RTX 3080, commensurate with their RT core counts and overall rendering ability. The RTX 2060 brings up the rear as the least powerful card with RTX capabilities.


RTX 3060 Ti vs 3070 vs 3080: Gaming Benchmarks

1080p Performance

1080p FPS Gaming

RTX 3060 Ti vs RTX 3070 vs RTX 3080 Benchmarks 1080p FPS Gaming

Doom Eternal is really well optimised and shows good scaling with GPU power, with everything from the RX 5700 XT up exceeding 240FPS.

In Call of Duty: Warzone, on high settings, we see all the Ampere cards exceed a 200FPS average, but you’re not gaining a huge amount from the additional cost and power of the RTX 3080. We’re likely seeing the ceiling of the 5800X’s performance here, although lows don’t suffer like they can in some titles when CPU bound. All of the Ampere GPUs hit a hard 250FPS frame rate cap.

Rainbow Six Siege shows good scaling at 1080p, but even an entry-level GPU is capable of fairly insane framerates in this game.

1080p AAA Gaming

RTX 3060 Ti vs RTX 3070 vs RTX 3080 Benchmarks 1080p AAA Titles

As for the AAA titles, in Shadow of the Tomb Raider we see all the GPUs exceed a 144FPS average at 1080p, with the 3080 nearing 200FPS. Of note here is that the 3070 was 78% GPU bound and the 3080 only 50% GPU bound: even with a Ryzen 5800X, the higher-performing Ampere GPUs are leaving performance on the table in this title at 1080p.

In Red Dead Redemption 2, a more demanding title, the high settings load GPUs even at 1080p. The 2080 Ti and 3080 are the only GPUs to exceed a 100FPS average, but in this title we’re really looking for a consistent 60FPS at high settings, and everything from the GTX 1080 Ti and RX 5700 XT upwards is capable of that here. Like Doom Eternal, this game uses the Vulkan API, which helps the 5700 XT perform; AMD cards respond well to Vulkan.

1080p Flight Simulator

RTX 3060 Ti vs RTX 3070 vs RTX 3080 Benchmarks Flight Simulator

And finally, in Flight Sim 2020 at 1080p, CPU performance is as exposed as GPU performance. We see minor improvements from the higher-tier cards, but even the 3080 only just hits 60FPS: that’s a function of the demanding nature of this test, flying over New York City at low level. As you’d expect, at 1080p all of these GPUs perform very well, and we do see utilisation drop below 100% on occasion as they wait for the CPU to process the game data. Ultra settings see performance drop about 10FPS across the board.


1440p Performance

1440P FPS Shooters

RTX 3060 Ti vs RTX 3070 vs RTX 3080 Benchmarks 1440p FPS Shooters

This resolution is a much better match for these GPUs, and Doom Eternal is really well optimised, letting the higher-end hardware shine. Again we see the power of the 3080 compared to the 2080 Ti, and Nvidia’s claims of the 3070 matching the 2080 Ti ring true. Doom has a very well refined settings system: each graphics preset adds or removes about 10FPS, so you can fine-tune the game to your liking. All of these GPUs deliver a fluid, exciting experience at 1440p and over 100FPS; even the 1660 Super hits a 90FPS average, which is still very enjoyable, though you might want to reduce settings a notch or two.

Rainbow Six Siege shows excellent scaling with the more powerful GPUs, and the RTX 3080 really stretches its legs to deliver a 400FPS average. Even the 3060 Ti exceeds 260FPS, and we’re well into diminishing returns since we’ve far exceeded even a fast monitor’s refresh rate; generally this is a title that would be played at 1080p. Note that the 2060 KO and below were run on a Ryzen 3600 system, which likely accounts for some of the performance difference here, but it’s not a huge one.

In Call of Duty: Warzone, we still see all the cards exceed a 144FPS average at high settings. The RTX 2080 Ti marginally outperforms the Ampere cards here, which could be down to it being an established card with optimised drivers whilst the Ampere cards are newer, but note that they don’t scale well: an RTX 3080 isn’t getting you substantially more performance in this title over the much cheaper 3060 Ti.

1440P AAA Titles

RTX 3060 Ti vs RTX 3070 vs RTX 3080 Benchmarks 1440p AAA Titles

Moving on to AAA titles, Red Dead Redemption 2 can make the most powerful graphics cards sweat, and we have turned up the settings here. That’s for a couple of reasons: firstly, because the game should be experienced in all its glory at higher settings, and secondly, because it shows the significant break between the cards that can maintain a 60FPS average and those that can’t. Last-generation cards struggle, whilst the 3060 Ti comfortably maintains nearly 70FPS, and the RTX 3070, RTX 2080 Ti, and RTX 3080 all approach a 100FPS average. Again, you can tweak settings to get the performance you’re happy with, as they are turned right up here, but those GPUs are capable of providing an excellent experience at 1440p.

Looking at Shadow of the Tomb Raider, this title shows where the GTX 1660 Super begins to struggle, although it is still capable of running well-optimised titles like this at 1440p. The scaling with the more powerful GPUs is clear, with all of the Ampere GPUs exceeding a 100FPS average. This is at ‘highest’ settings, and the 3080 is the only card that manages to exceed a 144FPS average at 1440p in this title with the settings cranked.

In Flight Simulator at 1440p, we’re now much more GPU bound. A 50-60FPS average at high-end settings really is a good performance in this test, and it looks utterly gorgeous doing it. All Ampere GPUs exceed 30FPS at all times. 1440p is where Flight Sim 2020 really begins to shine, and the RTX 3060 Ti in particular impressed me with its performance here. Lifting settings to ‘Ultra’ adds some nice visual tweaks but costs about 10FPS across the board. Again, there’s room to tweak: with just a couple of detail settings turned down from ultra, you get most of the performance benefit at almost no detriment to visuals.

1440p Conclusion

We can see that these cards really begin to come alive at 1440p, significantly outpacing all but the highest-end options from the last generation. It was the 3060 Ti that impressed me most, with performance very much in the same ballpark as its bigger siblings. Once you’re up to 100FPS in AAA titles at high settings, you really are getting what you paid for, so there’s not much more to ask of the 3060 Ti. The RTX 3070 also excels at this resolution.


1440p Ultrawide Gaming

Moving on to ultrawide 1440p: this resolution bridges the gap to 4K and is becoming increasingly popular. We particularly like the versatile workspace it offers, combined with the immersion it brings to gaming.

1440p Ultrawide FPS Gaming

RTX 3060 Ti vs RTX 3070 vs RTX 3080 Benchmarks 1440p Ultrawide FPS Gaming

The first-person shooters still record high FPS if they’re well optimised: Rainbow Six Siege generates over 250FPS at high settings. COD: Warzone really does seem to be CPU limited; FPS doesn’t drop much from 1440p, so you’re at no penalty opting for ultrawide. Doom Eternal still generates well over 144FPS on everything except the GTX 1080 Ti, and we’re at Ultra Nightmare settings here.

1440p Ultrawide AAA Titles

RTX 3060 Ti vs RTX 3070 vs RTX 3080 Benchmarks 1440p Ultrawide AAA Titles

Moving on to AAA titles, we’re exceeding 60FPS in Red Dead Redemption 2 on all the Ampere cards at demanding settings. The 3060 Ti and 3070 are again relatively close in performance, with 10FPS between them, and only the 3080 distinguishes itself, another 20FPS higher. Shadow of the Tomb Raider follows much the same pattern but with slightly higher framerates; the 3070, 3080, and 2080 Ti all approach or exceed 100FPS.

And finally, at 1440p Ultrawide, we see the same scaling in Flight Simulator 2020. All of these GPUs approach or exceed 50FPS even at this resolution, which is impressive, but you’re paying a great deal in stepping from an RTX 3060 Ti to an RTX 3070 and on to an RTX 3080 just to go from 46 to 60FPS.

The takeaway here is really how closely the RTX 3070 matches the RTX 2080 Ti, so Nvidia’s claims ring true there. Also note that while the hierarchy of cards is now well established, the 3060 Ti really isn’t a bit-part player: it’s very much keeping pace, and in any of these titles you’re not going to see or feel a difference between, say, 160FPS and 190FPS. With just a couple of settings turned down, it matches the performance of the RTX 3070. The 3070 is clearly excellent at 1440p ultrawide, but the 3060 Ti is more than capable as well, and the 3080 holds its clear lead.


4K Performance

4K FPS Shooters

RTX 3060 Ti vs RTX 3070 vs RTX 3080 Benchmarks 4K FPS Shooters

Moving on to 4K, we’re now at a resolution that can make these GPUs really work hard. Looking at the shooters, note that we’re still exceeding 100FPS in Doom Eternal and CoD: Warzone, and we’re still at high settings. Likewise, in Rainbow Six Siege we’re exceeding 144FPS, meaning you’ll be maximising a high-performance 4K monitor, not that this is an ideal setup for competitive play. The RTX 3080 is still pushing past 300FPS, but the 3060 Ti and 3070 are much lower here, beaten even by the GTX 1080 Ti; I confirmed this with multiple runs but can’t say exactly why it happens.

4K AAA Titles

RTX 3060 Ti vs RTX 3070 vs RTX 3080 Benchmarks 4K Gaming AAA Titles

AAA titles are where you can revel in the detail that rendering at 4K gives, but we’re starting to see all of these GPUs struggle to make the magic 60FPS. You’re going to have to turn settings down a little from those we’ve used throughout these benchmarks. Again we see the 3070 shadowing the 2080 Ti closely, although the 2080 Ti does come out on top, just, in these three tests. The 3080 has a clear advantage, and if you’re building a 4K-focussed gaming PC then it’s really between the RTX 3080 and the RX 6800 XT as to which GPU is right for you.


Conclusions

The RTX 3060 Ti

Firstly, the RTX 3060 Ti was the standout card of this test. It performs excellently, to the point where it’s close to the RTX 3070 in most of these titles at 1440p. In well-optimised games and first-person shooters, it produces fluid, responsive gameplay at high FPS and high settings; in demanding titles, it punches well above its weight. The RTX 3060 Ti is a great card, and if you’re looking to trade off components in your system to afford a better CPU, monitor, or SSD, you should absolutely consider it. It won’t feel like a compromised choice at all.

Ampere Gaming at 1080p

Secondly, none of the Ampere cards achieve their potential at 1080p. They’re either exposing CPU limitations in very high FPS titles or not shining in AAA titles restricted by a lower pixel count. You can make a case for the RTX 3060 Ti in either a very high FPS esports build or a PC aimed at playing AAA titles at 1080p and very high settings, but really, if you’re considering any of these Ampere cards, we’d recommend you start your search with a 1440p monitor capable of 144Hz and adaptive sync. That’s where you start getting your money’s worth out of these GPUs.

The RTX 3080

Finally, of course, there’s the RTX 3080. This GPU is an absolute monster. It cleanly wins every benchmark here, and whilst the 3070 has clearly been pegged to the RTX 2080 Ti, and the RTX 3060 Ti massaged to ensure it’s 10% or so slower than that, the RTX 3080 is allowed to stretch its legs and hit the limits of its capability. That really comes down to power restrictions, but we’ll dig into that more with a specific look at the RTX 3080 in a future article.

Watch out for bottlenecking in aged systems

One note of caution: if you pair these GPUs with an ageing system, they’re all powerful enough to expose weakness, particularly in CPUs. You may find that whilst you can run at higher settings, you begin to experience stutter and frame time inconsistencies if you hit the limit of your CPU’s performance. That’s something we want to investigate, and we’ll be running tests with a Ryzen 5 3600 and last-generation Intel CPUs to see whether they negatively impact performance at all, or whether they’re sufficient to maximise the potential of these GPUs.


Our Recommendations

Best Ampere Card for 1080p FPS Gaming / 1440p All-rounder Builds: RTX 3060 Ti

Nvidia RTX 3060 Ti Founders Edition

Based on our results, realistically we’d recommend the RTX 3060 Ti as the high FPS esports GPU at 1080p, or for a 1440p all-round gaming machine.

Related: RTX 3060 Ti Aftermarket Card Overview
Related: RTX 3060 Ti Aftermarket Card Database


Best Ampere Card for 1440p AAA Gaming: RTX 3070 / 3060 Ti

Nvidia RTX 3070 Founders Edition

Stepping up into solid 1440p AAA title performance, the RTX 3070 makes a strong case for itself, but cross-shop the RTX 3060 Ti if you’re working to a budget: you won’t feel short-changed by its performance, and if it allows you to purchase a high-performance CPU as well, then it’s the right choice to make.


Best Ampere Card for 1440p Ultrawide Gaming: RTX 3070 / 3080

Nvidia RTX 3070 Founders Edition

At 1440p Ultrawide, the RTX 3070 shines, but the additional cost of an RTX 3080 begins to make sense to provide a really remarkable gaming experience.

Related: Best RTX 3070 Aftermarket Cards
Related: RTX 3070 Aftermarket Card Overview
Related: RTX 3070 Aftermarket Card Database


Best Ampere Card for 4K Gaming: RTX 3080

Nvidia RTX 3080 Founders Edition

And finally, 4K is such a demanding resolution that in high-end titles like Red Dead Redemption 2 or Cyberpunk you will need to lower settings from Ultra to maintain 60FPS even on an RTX 3080. There is one ace up the Ampere GPUs’ sleeve, though: DLSS. This technology from Nvidia uses a trained neural network to up-sample the rendered image for display; when it works well, it allows you to combine heightened settings with higher frame rates. However, just a handful of titles use it to its full potential, so it’s something we’re looking into and will deliver our impressions on later.

Related: Best RTX 3080 Aftermarket Cards
Related: RTX 3080 Aftermarket Card Overview

I hope you’ve found this roundup of the Ampere GPUs useful: pair any one of these cards with a well-matched monitor (linked below) and you’ll have an absolutely fantastic setup to enjoy the latest games as they are meant to be played.
