
Advanced Micro Devices, Inc. Message Board

dr_max_facts 55 posts  |  Last Activity: Jul 26, 2015 10:57 AM Member since: Jan 24, 2002
  • dr_max_facts dr_max_facts Jul 26, 2015 10:57 AM Flag

    Expect around 50% or more performance over the already fast GTX 980 Ti, which will see NVIDIA easily dominate AMD's Radeon R9 Fury X. But where does this leave AMD? Right now, AMD is in dire need of a huge architectural change, as Fiji didn't really bring anything new to the table. All AMD has done is use HBM1, and even its benefits weren't really shown on the Fury X, apart from the card being smaller than usual. NVIDIA is really going to leapfrog AMD next year with its triple punch of 16nm + Pascal + HBM2.

  • All eyes have turned to NVIDIA in anticipation of their latest and greatest next-generation GPU, the GP100, Pascal.
    NVIDIA’s Pascal GPUs are set to make use of massive 4096-bit memory buses.

    Moreover, NVIDIA will also make use of HBM memory in their latest flagship series, only they’ll be adopting the second generation of HBM, HBM2.

    HBM2 promises significant performance as well as capacity enhancements over HBM and will allow NVIDIA to stack DRAM dies 4-Hi on their consumer-level GP100 GPUs. Their industrial-grade, professional Quadro and TESLA GPUs will receive 8-Hi stacks.

    Keep in mind that with 4-Hi HBM2 running at 1GHz on a 4096-bit memory bus, Pascal is set to have an absurd throughput – we’re talking the same levels of bandwidth Jean-Luc Picard had to make use of on his Sovereign-class USS Enterprise starship.
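That bandwidth claim is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming HBM2's double-data-rate signaling (the 1GHz clock and 4096-bit bus are the figures from the post above):

```python
# Peak-bandwidth arithmetic for 4096-bit HBM2 at a 1 GHz memory clock.
# HBM transfers data on both clock edges (DDR), so each pin moves
# 2 bits per clock cycle.
bus_width_bits = 4096        # e.g. four 1024-bit HBM2 stacks
clock_hz = 10**9             # 1 GHz memory clock (figure from the post)
bits_per_clock = 2           # double data rate

peak_bytes_per_s = bus_width_bits * clock_hz * bits_per_clock // 8
print(peak_bytes_per_s / 10**9, "GB/s")  # 1024.0 GB/s, i.e. ~1 TB/s
```

So the "absurd throughput" works out to roughly 1 TB/s at those assumed figures, double HBM1's 512 GB/s on the same bus width.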

    It’s why NVIDIA’s roadmap has Pascal, in comparison to older technology, in a league of its own.

    Throw in Pascal’s TSMC 16nm FinFET process, allowing for significantly denser and more power efficient GPUs than was possible on Maxwell 2’s 28nm process, and you have a heck of a lot of potential under the GP100’s hood.

    And because that's not nearly enough for NVIDIA, Pascal will also sport their newly developed NVLink technology, allowing for 5 to 12x faster transfer speeds than traditional PCIe 3.0 between the GPU and the rest of the PC.

    We’re not sure about you, but we find Pascal fantastically exciting.

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jul 18, 2015 11:09 PM Flag

    AMD's debt-laden balls are rolling down the inverted 3D cone, headed for BK reorg.

    AMD's upcoming Fiji GPU will be sporting HBM1, which is limited to only 4GB; Nvidia is skipping it entirely. During Nvidia's GTX presentation, Nvidia CEO Jen-Hsun Huang claimed that low yields of Fiji GPUs are causing AMD to suffer Radeon R9 Fury stock shortages. AMD is smearing its high-demand lipstick on a low-yielding HBM1 piggy, trying to make her look like something she is not.

    AMD Doomed

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jul 16, 2015 8:07 PM Flag

    NVLink is an energy-efficient, high-bandwidth communications channel that uses up to three times less energy to move data on the node at speeds 5 to 12 times those of conventional PCIe Gen3 x16. First available in the NVIDIA Pascal GPU architecture, NVLink enables fast communication between the CPU and the GPU, or between multiple GPUs. NVLink is a key building block in the compute node of the Summit and Sierra supercomputers.
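The 5-12x claim can be grounded against PCIe Gen3's own peak rate. A rough sketch, assuming PCIe 3.0's standard 8 GT/s per lane and 128b/130b encoding (standard spec figures, not from the post):

```python
# Ballpark NVLink's quoted 5-12x advantage over PCIe Gen3 x16.
# PCIe 3.0 signals at 8 GT/s per lane with 128b/130b line encoding.
lanes = 16
transfers_per_s = 8 * 10**9
encoding_efficiency = 128 / 130

pcie3_x16_bytes = lanes * transfers_per_s * encoding_efficiency / 8
print(round(pcie3_x16_bytes / 10**9, 2), "GB/s per direction")  # ~15.75
print(round(5 * pcie3_x16_bytes / 10**9), "to",
      round(12 * pcie3_x16_bytes / 10**9), "GB/s")              # ~79 to 189
```

Scaling that ~15.75 GB/s baseline by 5-12x lands right on the 80-200 GB/s range quoted for NVLink elsewhere in these posts.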

    Volta GPU featuring NVLink and stacked memory:
    NVLink GPU high-speed interconnect: 80-200 GB/s
    3D stacked memory: 4x higher bandwidth (~1 TB/s), 3x larger capacity, 4x more energy efficient per bit

    NVLink is a key technology in Summit’s and Sierra’s server node architecture, enabling IBM POWER CPUs and NVIDIA GPUs to access each other’s memory quickly and seamlessly. From a programmer’s perspective, NVLink erases the visible distinction between data attached to the CPU and data attached to the GPU by “merging” the memory systems of the two with a high-speed interconnect. Because both the CPU and the GPU have their own memory controllers, the underlying memory systems can be optimized differently (the GPU’s for bandwidth, the CPU’s for latency) while still presenting as a unified memory system to both processors. NVLink offers two distinct benefits for HPC customers. First, it delivers improved application performance, simply by virtue of greatly increased bandwidth between elements of the node. Second, NVLink with Unified Memory technology allows developers to write code much more seamlessly while still achieving high performance.

  • dr_max_facts dr_max_facts Jul 16, 2015 6:11 PM Flag

    Pascal + 16nm FinFET + NVLink + HBM2 = AMD DOOMED

    The world’s No. 1 producer of discrete graphics processors will reportedly use Taiwan Semiconductor Manufacturing Co.’s 16nm FinFET fabrication technology to make its “Big Pascal” GPU. Given the timeframe of the tape-out, it is highly likely that Nvidia will use TSMC’s advanced 16nm FinFET+ (CLN16FF+) manufacturing technology. According to the post, the BP100 is Nvidia’s first 16nm FinFET chip, and the company has changed its approach to the roll-out of new architectures. Instead of starting with simple GPUs and introducing the biggest processors quarters after the initial chips, Nvidia will begin the “Pascal” roll-out with the largest chip in the family.

    Nvidia’s “Pascal” architecture represents a big leap for the company. Thanks to the all-new architecture, Nvidia’s next-gen GPUs will support many new features introduced by the DirectX 12, Vulkan and OpenCL application programming interfaces. The 16nm FinFET process technology will let Nvidia engineers integrate considerably more stream processors and other execution units compared to today’s GPUs, significantly increasing overall performance. In addition, next-generation graphics processing units from Nvidia will support second-generation stacked high-bandwidth memory (HBM2). HBM2 will let Nvidia and its partners build graphics boards with 16GB – 32GB of onboard memory and 820GB/s – 1TB/s of bandwidth. For high-performance computing (HPC) applications, the “Big Pascal” chip will integrate NVLink interconnection tech with 80GB/s or higher bandwidth, which will significantly increase the performance of “Pascal”-based Tesla accelerators in supercomputers. Moreover, NVLink could bring major improvements to multi-GPU technologies thanks to its massive bandwidth for inter-GPU communications.

    Sentiment: Strong Sell

  • Reply to post by instrinsic_turd, Jul 6, 2015 7:33 PM
    dr_max_facts dr_max_facts Jul 8, 2015 11:55 AM Flag

    Brilliant synopsis of castrated AMD's blundering downward spiral to Doom.

    "Real men have fabs," said AMD founder Jerry Sanders. Hector turned AMD into a blundering, stumbling Caitlyn while doing the Mexican Hat Dance all the way to the bank.

    Fusion was a debt-dooming delusion, the Barcelona Phenom a watt-sucking peon, Bulldozer a watt-wasting, IPC-crippled BSdozer, Seasickmicro a cash-burning fiasco. Debt-laden AMD's pea-brained CEOs fought on too many fronts with a peashooter and were blown into the stone age.

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jul 1, 2015 5:21 PM Flag

    Our Fury X review unit does have one rather obvious drawback. Whenever it's powered on, whether busy or idle, the card emits a constant, high-pitched whine. It's not the usual burble of pump noise, the whoosh of a fan, or the irregular chatter of coil whine—just an unceasing squeal like an old CRT display might emit. The sound comes from the card proper, not from the radiator or fan. An informal survey of other reviewers suggests our card may not be alone in emitting this noise.

    PC Perspective ordered up a couple of retail cards from Newegg and tested them.

    Their report on the results is kind of science-y, replete with frequency analysis plots and such, but you don't need the graphs to understand the basic problem. The current batch of retail Fury X cards does not include an effective fix for the problem we found. They still whine. Worse yet, the retail cards sometimes seem to make other noises that the review samples do not, including apparent buzzing sounds from the pumps.

    I will say, though, that the microphones and speakers involved in recording and playing back this sound don't seem to capture entirely the frustrating character of the high-pitched whine. Something about it puts my monkey brain on edge, as if a teeny, tiny version of nails on a chalkboard were playing constantly in the background.

    In addition to the PC Perspective report, a number of videos have popped up on YouTube in recent days showing off Fury X pump noise, including the whine but also rougher sounds. I hesitated to post about them since the extra noise didn't track with our experience, but it all seems to fit with PC Perspective's findings.

    But it does appear AMD's statement about the Fury X noise problem being fixed and "not an issue" was more wishful thinking than an accurate assessment of the current situation. Caveat emptor.

    Sentiment: Strong Sell

  • NO 28nm GPU NEEDS HBM1. HBM1 bandwidth is USELESS and wasted on 28nm GPUs; Fury would have done much better without its measly 4GB of stuttering HBM1 for 4K gaming.

    It's laughable how you AMDroids pump a compact design that saves space where space doesn't need to be saved BUT NEEDS a MASSIVE water-cooled radiator and has to be clocked to within an inch of its life to even approach the performance of the reference air-cooled Maxwell 980 Ti. Nvidia's many partners will trash, bash and thrash AMD's over-hyped, noisy, coil/pump-whining Fiji with highly overclocked air-cooled 980 Tis that can be overclocked to 1450/1500 MHz and beyond.

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jun 26, 2015 2:35 PM Flag

    Pascal is taped out with HBM2, NOT the 4K-gaming-lame, 4GB-limited HBM1 of Fury.

    Pascal is coming EARLY next year Q1 from what I hear.

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jun 26, 2015 11:56 AM Flag

    The 6GB Maxwell 980 Ti is BUILT for 4K. "Built for 4K gaming?" AMD's watt-sucking, micro-stuttering Fury, even when attached to a massive water-cooled radiator, is 4K-gaming lame due to having only a measly 4 GB of memory.

    The Performance Per Watt Leader, the Maxwell 6GB 980 Ti, is 4K Gaming Future Proof; AMD's 4GB Fury is NOT.

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jun 26, 2015 11:47 AM Flag

    Bidness Etc still regards Nvidia’s 980 Ti as the better card in terms of performance.

    Nvidia’s 980 Ti is an overall better card in terms of performance, since it has an astonishing seven times higher effective memory clock speed and 6GB of RAM compared to the 4GB in the Fury X. This allows it to perform much better at 4K resolution. Nvidia’s performance advantage over the Fury X is also due to its 32 additional render output processors.
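Note that the "seven times" figure refers to the effective data rate per pin, not total bandwidth. A quick sketch using the cards' published memory specs (7 Gbps GDDR5 on a 384-bit bus for the 980 Ti; 500 MHz DDR, i.e. 1 Gbps, HBM1 on a 4096-bit bus for the Fury X):

```python
# Effective memory clock vs. total bandwidth: 980 Ti vs. Fury X.
# GDDR5 on the 980 Ti: 7 GHz effective. HBM1 on Fury X: 500 MHz DDR = 1 GHz.
gddr5_eff_hz, gddr5_bus_bits = 7 * 10**9, 384
hbm1_eff_hz, hbm1_bus_bits = 10**9, 4096

print(gddr5_eff_hz / hbm1_eff_hz)                  # 7.0x clock advantage
print(gddr5_bus_bits * gddr5_eff_hz // 8 / 10**9)  # 336.0 GB/s (980 Ti)
print(hbm1_bus_bits * hbm1_eff_hz // 8 / 10**9)    # 512.0 GB/s (Fury X)
```

So the Fury X's much wider HBM1 bus actually gives it more raw bandwidth despite the lower clock; the clock comparison alone doesn't decide performance.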

    AMD has bettered Nvidia with its latest Fury X in terms of power consumption and heat. The liquid cooling system incorporated has resulted in the tech giant keeping the GPU temperature down to a remarkable 60 degrees Celsius, whereas Nvidia's cards reach temperatures of 80 degrees and higher.

    However, there is still one problem that undermines the Fury X: high frame-time variance, something which has been overlooked in test results.

    “If you dig deeper using our frame-time-focused performance metrics—or just flip over to the 99th-percentile scatter plot above—you'll find that the Fury X struggles to live up to its considerable potential. Unfortunate slowdowns in games like The Witcher 3 and Far Cry 4 drag the Fury X's overall score below that of the less expensive GeForce GTX 980. What's important to note in this context is that these scores aren't just numbers. They mean that you'll generally experience smoother gameplay in 4K with a $499 GeForce GTX 980 than with a $649 Fury X. Our seat-of-the-pants impressions while play-testing confirm it.”

    Unless AMD caters to the frame variance issues of its Fury X through driver updates, there will still be a considerable difference in performance between it and Nvidia’s 980 Ti, which it was built to compete against, especially when both cards are priced at $650.

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jun 24, 2015 3:25 PM Flag

    GTX 980 Ti offers better value for money. The two cards have price parity but Nvidia's is faster and significantly better at overclocking, too. With pre-overclocked versions readily available, the performance difference will be even higher. There's some argument for UK customers to opt for the Fury X at 1440p, especially if you're really concerned about card length or noise. That said, third-party coolers for Nvidia's hardware are often very quiet too, and many mini-ITX cases can still house long cards and you won't need to mount a radiator.

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jun 24, 2015 2:48 PM Flag

    Slower than expected in sub-4K resolutions
    Pump emits permanent high-pitched whine
    Some coil noise
    Could be much quieter in idle
    4 GB of VRAM
    Lack of HDMI 2.0
    No memory overclocking
    Radiator takes up extra space
    No DVI / analog VGA outputs

    AMD's Fury X comes with "only" 4 GB of HBM memory, which is a technological limitation; there are no bigger HBM chips available at this time. Forums will be full of "don't buy Fury X, 4 GB VRAM is not enough" posts, which might affect the buying decisions of consumer-level gamers. On the other hand, 6 GB on the GTX 980 Ti is more no matter how you look at it, which in some ways is more future-proof, because you would be immune to those odd titles with exceedingly large VRAM usage.

    However, I feel the watercooler isn't the ideal implementation. The pump emits an annoying high-pitched whine that definitely creates more frustration than the fan, which runs very quietly. In addition, the card exhibits some coil noise, depending on the game and framerate; it's definitely there, and combined, the two effectively overpower the fan's noise levels. Idle fan noise could also be lower, and in ZeroCore mode neither the fan nor the pump turns off completely. Recent NVIDIA cards introduced an idle fan-off mechanism, which is missing on the Radeon.

    The card's overclocking potential is slim, and memory overclocking has been disabled completely. What I am more concerned about is the limited GPU overclocking potential. The GM200 GPU on the GeForce GTX 980 Ti and Titan X overclocks much better, which means that with both cards overclocked to the max, the GTX 980 Ti will have a large performance lead over an overclocked Fury X.

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jun 24, 2015 2:31 PM Flag

    AMD's propaganda-pumped, micro-stuttering Quantum is 4K GAMING OBSOLETE; VRAM DOES NOT Stack.

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jun 24, 2015 1:58 PM Flag

    Yes, Hei4me, you're one of the Best FUD Busting Fact Finders. FUBAR Fury with its limited HBM1 is NO Future Proofing 4K Gaming Solution; it's a watt-sucking, micro-stuttering waste of money.

    FUBAR Fury with its limited HBM1 is not a 4K-gaming-worthy Flagship; it's just more of AMD's propaganda-pumping BS that AGAIN Proves AMD NEVER WALKS Their Hype Pumping Talk!

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jun 24, 2015 1:06 PM Flag

    After hearing AMD’s Joe Macri wax poetic about the Fury X’s overclocking potential, it’s majorly disappointing to see it fail so hard on that front. It Proves AGAIN that AMD NEVER Walks its propaganda pumping Talk.

    After hearing about HBM’s lofty technical numbers for months, it’s disappointing to see little to no pure gaming benefits from all that bandwidth. After seeing the tech specs and hearing AMD’s Joe Macri wax poetic about the Fury X’s overclocking potential, it’s majorly disappointing to see it fail so hard on that front, #$%$ silicon lottery draw or no. And while 6GB of RAM is still overkill for the vast majority of today’s games, it’s disappointing to see the Fury X limited to just 4GB of capacity when some of today’s games are starting to blow through that at the 4K resolution that AMD’s new flagship is designed for, as evidenced by our GTA V results.

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jun 24, 2015 12:45 PM Flag

    Yes, my Fact Finding Friend, HardOCP's Big FUD Busting Ball is Bouncing on the AMDroids' hype-pumping chin.

    AMD's over-hyped, watt-wasting FUBAR Fiji is just ANOTHER underwhelming Piece of Sheet that Proves AGAIN that AMD NEVER Walks its propaganda pumping Talk.

    So sorry, lip-flapping, perma-pumping AMDroids, but it's so Factually True.

    AMD's Cred is Dead, Buried, and soon forgotten. AMD, RIP.

    Sentiment: Strong Sell

  • The Bottom Line

    The new AMD Fiji GPU and Fury X video card look awesome on paper but have underwhelmed and disappointed us when it comes to real-world gameplay. The AMD Radeon R9 Fury X feels like a proof of concept for HBM technology.

    In terms of gaming performance, the AMD Radeon R9 Fury X seems like better competition for the GeForce GTX 980 4GB video card than for the GeForce GTX 980 Ti. GTX 980 cards are selling for as low as $490 today. This is not a good thing, since the AMD Radeon R9 Fury X is priced at $649, the same price as the GeForce GTX 980 Ti.

    Usually trying to decide between two video cards at the same price point is a wash, with very even and split performance. However, this is not the case this time with the AMD Radeon R9 Fury X and GeForce GTX 980 Ti. There is a definite pattern that leads to one video card being the best value for the money, and it is GeForce GTX 980 Ti, not the AMD Radeon R9 Fury X.

    The card offers limited VRAM for a flagship $649 video card, sub-par gaming performance for the price, and limited display support options with no HDMI 2.0 and no DVI port. To be honest, we aren't entirely sure who the AMD Radeon R9 Fury X is really built for. The AMD Radeon Fury X is a confusing product, like a technology demo not fully realized: a showcase for HBM only, with no real substance. The AMD Radeon Fury X looks to be a great marketing showcase, but its prowess starts waning when you consider its value to gamers and hardware enthusiasts.

    Sentiment: Strong Sell

  • You can compare GDDR5 capacity directly with the VRAM capacity of HBM memory. If the GPU runs out of VRAM, what happens in both scenarios? It has to swap out of memory; no matter whether it is GDDR5 or HBM, the result is the same.

    Who is Built for 4K?

    To make a video card that is built for 4K gaming, some basic things need to happen. First, the GPU must be fast: it must be able to handle high resolutions and pump out the performance needed to push 4K. In terms of pixels, 4K is 8,294,400 pixels; compare that to 1440p's pixel count of 3,686,400.
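The quoted pixel counts check out, assuming "4K" means 3840x2160 UHD and 1440p means 2560x1440:

```python
# Verify the pixel counts quoted for 4K UHD vs. 1440p.
uhd_pixels = 3840 * 2160   # "4K" UHD
qhd_pixels = 2560 * 1440   # 1440p QHD
print(uhd_pixels)                  # 8294400
print(qhd_pixels)                  # 3686400
print(uhd_pixels / qhd_pixels)     # 2.25x more pixels to push
```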

    Part of the GPU specification that helps push these pixels is the ROP count. NVIDIA scaled its ROPs up to 96 with the GeForce GTX 980 Ti and TITAN X. AMD, however, did not scale up its ROPs compared to the 290X/390X; it is still at 64 ROPs.

    NVIDIA made sure to equip the GeForce GTX 980 Ti with 6GB and the TITAN X with 12GB. This is where a video card needs to be if you are aiming for one that is, quote, "built for 4K." No current amount of memory bandwidth is going to overcome the physical limitation of VRAM.

    NVIDIA also made sure that not only DisplayPort 1.2 is on board for 4K 60Hz support but also HDMI 2.0, which is needed for 4K at 60Hz over HDMI. AMD removed the DVI connections but then only gave us HDMI 1.4, with no 4K 60Hz support. In our opinion, if you are going to remove I/O connection options, then at least support the latest version of each remaining connection for the latest resolution and refresh-rate support. In this case that would be DisplayPort 1.3 and HDMI 2.0; had AMD done that, the card would have felt more capable of driving 4K displays.

    Finally, the performance must be there. Your video card must perform. So far in our testing, the AMD Radeon Fury X trails the performance of the GeForce GTX 980 Ti even though it is the same price. Add all these facts up, and you tell us which video card is "built for 4K gaming."

    Sentiment: Strong Sell

  • dr_max_facts dr_max_facts Jun 24, 2015 10:33 AM Flag

    The Fury X is, quite easily, the best GPU to come out of AMD. A 40 per cent performance improvement over the R9 290X is nothing to sniff at. But that, really, is not the real competition. Nvidia's pre-emptive launch of the GeForce GTX 980 Ti uses a more efficient core and regular GDDR5 memory to achieve benchmark performance that is, in our opinion, a little better than the latest Radeon's, perhaps helped in small part by a larger framebuffer. Partner GTX 980 Tis are faster still and overclock better than the Fury X.
