NVIDIA Corporation Message Board

  • drub0y77 Jan 19, 2010 5:21 PM

    AMD Accuses Nvidia of Disabling Multi-Core CPU Support in PhysX API.

    http://www.xbitlabs.com/news/multimedia/display/20100119134616_AMD_Accuses_Nvidia_of_Disabling_Multi_Core_CPU_Support_in_PhysX_API.html

    Hahah, it's their friggin' API, idiots; they can do whatever they want with it. That's why they BOUGHT it and have invested in it heavily.

    That said, I do think they should look to provide a licensed version of PhysX that uses OpenCL/DirectCompute for its internal implementation. More money to be made.

    • btw, I want to add to the above that if ATI refused to allow PhysX via CUDA to be ported to run on their GPUs for free... it's a foregone conclusion that they wouldn't pay any $$ to license it.

      ATI will only support it when NVidia no longer owns it.

    • Yeah, I read that article before,

      " Nvidia has publicly said that it will work with ATI on CUDA-accelerated PhysX for over a year, but it’s an offer that ATI has yet to accept."

      1) if the world goes to OpenCL, then NVidia could port PhysX to it. 'IF'

      2) porting to OpenCL removes NVidia from the driver's seat, leaving them lobbying some committee about the future & direction of technology they paid for heavily.

    • Yeah, nVidia cards are hands down the performance leader when it comes to general compute tasks vs. just plain old graphics... and Fermi is only going to extend that lead.

      FYI, I found this article from a while back that had nVidia's director of product management discussing the future of PhysX and moving to OpenCL, so I'm sure it's in the works if not already done, considering this article is from March of 2009.

      http://www.bit-tech.net/news/hardware/2009/03/27/nvidia-considers-porting-physx-to-opencl/1

    • I read somewhere that the performance penalty associated with PhysX (in terms of framerate) was -30% on nV cards, but -60% on ATI cards. These are averaged numbers not including GF100 products, of course.

    • Since you use this stuff... if you code in CUDA, will it run only on an NVidia GPU ??

      If you develop a CUDA app, but the end user installs an AMD GPU, does the CUDA code then run on the CPU ?? or not run at all ?

      btw, if NVidia did license PhysX for $$... IMO, after tons of negative press, they would probably end up allowing it to be used for free.

      So... maybe after a generation or two, with NVidia distancing itself from the competition, with it being very unlikely they could catch up... it might then make it available for free.

      We all have opinions on what's best for NVidia (e.g. x86 development, PhysX licensing). We're just gonna have to have faith that Jensen will do what's best at the time.

    • Yup, I understand, but then they're damned if they do and damned if they don't.

      They could also make it so that you have to pay a licensing fee (nothing astronomical, mind you), or just put the "The way it's meant to be played" logo on your app and they'll waive the fee.

      Plenty of ways to skin the cat, but they have to do something. :)

    • I see it from your perspective as well. But if NVidia decided to license PhysX for $$... it just might make the entire industry revolt.

      So, do consider that the idea could have some very severe repercussions, anywhere from developers openly refusing to use it to the entire industry very quickly developing an open physics standard that excludes NVidia.

    • Thanks. It's good to hear from someone who understands this side of the industry well.

      <<We're starting to see nVidia become more of a software company every day.>>

      Yes, exactly. Beyond the barriers to entry on the hardware side ($1B in R&D + rarefied talent), NVDA is putting huge roadblocks up on the software side. Graphics isn't just hardware, it's HW + SW. Jensen is all about leveraging his work, and the other businesses you mention all have significant SW investments. It's called building a franchise, and it's hugely defensible.

      This is why
      - Intel has such a hard road ahead if they want to seriously play in grx
      - AMD will lose share over time as they are not putting in equivalent investments (and they don't view graphics as their core business)
      - And the hand-held/SoC guys (QCOM, TI, MRVL etc) are going to be scratching their heads a few years from now.

      I love NVDA's position for the long term.

    • Sure, I totally can see it from that perspective as well. I just so happen to think the $$ would be worth it because they can market games based on PhysX as part of their "way it's meant to be played" campaign which only helps their hardware sales in the end.

    • One more thing I forgot to mention in my above post:

      You mentioned developers making an additional CUDA-specific code path. I heard Jensen talk about one of the NV30's failures: developers had to compile specifically for NV30 to get its advantage... guess what, no one did. Developers coded it once to run on all platforms and NV30 suffered. It's a lesson NVidia will never forget, and I doubt they would push for developers coding it twice.

      The above is from an interview, or cc... I really don't remember. Wish I could give you more tech details, but in this case, I'm just reiterating what I heard.

 
NVDA
22.06-0.1400(-0.63%)Feb 27 4:00 PMEST

Trending Tickers

i
Trending Tickers features significant U.S. stocks showing the most dramatic increase in user interest in Yahoo Finance in the previous hour over historic norms. The list is limited to those equities which trade at least 100,000 shares on an average day and have a market cap of more than $300 million.