O - I doubt we see the 12s again; if we do, I'd back up the truck and load up. I would be happy to see a pullback to the mid 13s to recover the assigned block I will lose next week, but that may not be likely barring an overall correction, which we are due for. Either way, the stock is worth 18 based on potential revenue and is undervalued even at current revenue, technicals aside. The only thing that could be perceived as negative is their redefinition of the Tegra TAM: apparently automotive is going to be a larger driver of revenue than I thought, rather than, say, high-end smartphones. I know they won the next Ford/MS contract for the SYNC apps processor, so this does appear to be the case.
That is a whole lot of postulating and projection about unproven products from someone with a rather dated view of the industry. The last relevant technology you worked on went extinct with the mainframe, and now we should listen to your B.S. why? Somehow, because you regurgitate the findings and opinions of other ill-informed techno wannabes, you're an authority on this? Blah - I can Google too!
However, I will surely recant everything I say if any of your predictions come true, but until then take your mouth off of the Intel HOG and let it play out! Coming on here BS'ing WON'T improve market conditions for Intel any. BTW: RT isn't dead - all of Windows 8 is dead. Nobody wants it...especially for mobile.
Rumor - Nvidia is partnering with MS and will be providing the apps processor for their newest SYNC platform. This reaffirms, I think, the ongoing relationship Nvidia has with MS, and I expect more T4s in other upcoming MS-branded products. This is contrary to how I had originally viewed their relationship lately.
The smart thing to do is to roll those 13s to 14s for May, and so on. If you are patient and time it right, watching the price action, you can essentially buy and sell for a virtual wash, but at a strike $1 higher and with more time value. That's what I will be doing, though I will let more of the remaining time value expire during next week. Oh, and anybody selling options on this stock should never risk more than half of their position. :-)
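For anyone who wants to see the arithmetic of a roll like that, here is a minimal sketch; every price and size in it is a hypothetical placeholder, not a quote:

```python
# Rolling covered calls up and out: buy back the near strike, sell the
# next one a dollar higher. All prices are hypothetical placeholders.
buyback_price = 0.45  # cost to close the 13 calls (per share)
sell_price = 0.50     # premium collected on the May 14 calls (per share)
contracts = 10
shares_per_contract = 100

net_credit = (sell_price - buyback_price) * contracts * shares_per_contract
print(f"Net credit for the roll: ${net_credit:.2f}")  # roughly a wash

# What you gain: the cap on the shares moves from 13 to 14, so the
# position keeps an extra dollar per share of upside if the stock runs.
extra_upside = (14 - 13) * contracts * shares_per_contract
print(f"Extra upside if called away at 14: ${extra_upside:,.2f}")
```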
Then $200-$249 it is...assuming $150-$175 in BOM and assembly costs. However, that adds even more mystery as to how they will bill back the T4 component, although if Shield and Tegra both fall within the same vertical it may not matter as much.
Honestly, I can't see it being any more than $400, and even that is more than $100 above where I think it should be. I think $249-$300 is where I (my opinion) would say its sweet spot is. The BOM can't be any more than, say, $150 (the screen and the CPU/GPU are the main cost drivers among the active components), so $300 or thereabouts should yield a nice top-line margin of 5,000 basis points. On another note, it will be interesting to see how NVIDIA bills and structures its cost centers for this project: does Shield pay full price for the CPU, or do they just roll in the cost?
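To spell out that margin math (treating the BOM figure above as a rough assumption rather than a known number):

```python
# Back-of-the-envelope gross margin for Shield, using the rough
# BOM assumption from the post above.
price = 300.0  # hypothetical retail price
bom = 150.0    # assumed bill of materials + assembly

gross_margin = (price - bom) / price
print(f"Gross margin: {gross_margin:.0%} "
      f"({gross_margin * 10_000:.0f} basis points)")  # 50% = 5,000 bps
```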
If you own the stock, sell calls when the stock trends up. I think puts at these levels are equally as good, and if I were you I'd do both. I sold April 13 calls with the stock at about 12.75, anticipating flat movement through most of the summer. Instead of selling puts I bought 10k more shares at about 12.35 in lieu of selling 12.50 puts, and I wish I had waited another day or so and sold 12 puts (I didn't think it would run down this far with the buyback). Timing the market has been almost impossible for me. However, with my call selling and average-down purchases I am in at about 12 even. I am very confident that we see 16-18 before year end.
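As a rough illustration of how the premiums plus the average-down get you to a ~12 basis (all sizes and prices below are invented placeholders, not my actual fills):

```python
# Effective cost basis after an average-down and collected call premium.
# All sizes and prices are hypothetical placeholders.
lots = [
    (10_000, 12.75),  # original block (size assumed for illustration)
    (10_000, 12.35),  # the 10k average-down purchase mentioned above
]
premium_collected = 11_000.00  # hypothetical total premium from sold calls

shares = sum(qty for qty, _ in lots)
cost = sum(qty * px for qty, px in lots) - premium_collected
print(f"Effective basis: {cost / shares:.2f} on {shares:,} shares")  # ~12.00
```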
From my experience, x86 for true embedded applications is absolutely absurd. It can almost compete in more complex dedicated systems like media devices, but even then it falls behind in a hurry. We all but ruled out x86 cores, as their cost, thermal profiles, and performance all lag behind many ARM offerings. Unless you need to run Windows Embedded, there really is little benefit to x86.

The system we are aiming to replace right now was built on a single-core Atom and costs close to $1k to produce. During our evaluation process we benchmarked the graphics and computing performance of some of the more readily available x86 cores and compared the results to several ARM chips that cost half as much. We found that ARM crushed x86 not only in thermal profile and compute per watt but in raw performance, both graphical and computational. We had to move up to laptop-class processors before we saw x86 outperform the ARM cores we tested; however, those kinds of cores can't even masquerade as dedicated/embedded-class processors, at 5x the cost and with a thermal profile categorically unsuitable for an embedded/dedicated device. So, it's possible that GE and similar companies are moving to x86 in devices like image scanners, large dedicated security appliances, and the like, where a few hundred dollars doesn't affect margins all that much, but for smaller devices built in larger quantities x86 simply isn't economical.
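For what it's worth, the comparison we ran boiled down to something like this; the scores and power figures below are made-up placeholders, not our actual data:

```python
# Compute performance-per-watt from benchmark scores and power draw.
# All numbers are illustrative placeholders, not real measurements.
candidates = {
    "x86 single-core Atom (hypothetical)": {"score": 1200, "watts": 8.0},
    "ARM Cortex-A15 class (hypothetical)": {"score": 1500, "watts": 4.0},
}

for name, c in candidates.items():
    print(f"{name}: {c['score'] / c['watts']:.0f} points/watt")
# With figures like these the ARM part wins on raw score AND perf/watt,
# which matches what we saw in our evaluation.
```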
This is a great video and super informative for anyone interested in the possibilities of cloud gaming. I really wanted to comment on this post as an epilogue to our conversation yesterday. Two things stuck out to me in this video that I think are very important mentions.
1. Nvidia mentioned GRID working over LTE (4G?) and not just wired broadband. So, WOW! That is the first time I have heard that from the horse's mouth, and I really think (coming from a technology background) that running over LTE is a testament to the validity of the technology. If you recall, in my post yesterday I commented that I thought if the tech was good it would be adopted; well, I think this opens even more doors and WILL amplify adoption, very possibly by an order of magnitude over wired-only. Secondly, having this work over LTE further amplifies the synergy between GRID, Shield, and Tegra.
2. Another point I made was the unique product offering Nvidia has over its competition, and thus the technical/product leverage it has (or could have), specifically in the gaming/media-consumption domain. Well, at about 6:50 in, while demonstrating GRID on the (ASUS, I believe) T3 tablet, he comments (and I will paraphrase here): "...this has a T3 processor in it and it is optimized for cloud gaming because it does a better job of decoding the video." I think that comment speaks volumes. Although the stream is simply an H.264 stream, which could be decoded on any processor (CPU/GPU), either on the die or in the software stack, it shows me that, at least from a marketing perspective, Nvidia is attempting to leverage its branding and the perception that its in-house products complement one another better than the competition's. Not unlike Qualcomm and the marketing/technical advantage it had with an on-die baseband.
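To the point that the stream is plain H.264 and decodable almost anywhere: here's a minimal sketch, assuming the ffmpeg binary is installed and on your PATH, that lists the H.264 decoders a given build exposes. Hardware-backed entries (e.g. h264_cuvid on NVIDIA systems) show up right next to the plain software decoder:

```python
# List the H.264 decoders an ffmpeg build exposes, assuming ffmpeg
# is installed and on PATH. Hardware-backed decoders appear alongside
# the generic software one.
import subprocess

out = subprocess.run(
    ["ffmpeg", "-hide_banner", "-decoders"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if "h264" in line:
        print(line.strip())
```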
I sort of share that hunch. I think that all industries, over time, become stagnant, vanilla, and dated, and need some kind of "disruptive" refresh to breathe new life into them. So, I am definitely a proponent of cloud gaming, as gaming has become stale, but any guess I have as to its rate of adoption would be pure conjecture at this point. I think the most limiting factor is performance: if GRID works well, then so will cloud gaming.
I see things like this: I think Nvidia has a solid product footprint in several if not all of the areas of content delivery and consumption. There isn't a single player in the PC, mobile, or enterprise domains that has as complete or as complementary a product offering as Nvidia when it comes to gaming and media. It seems obvious to me that Nvidia, unlike its competition, is in a unique position to leverage all of its technology/IP/products to provide a unique user experience as it pertains to media and games. I think streaming games to Shield on your own "private cloud" via your PC is the start of that, and GRID on top of Shield will possibly become the next step.
You can look at it this way by asking this question: if cloud gaming does gain traction, wouldn't it stand to reason that Nvidia "marks" its Tegra silicon in such a way that it has a SIGNIFICANT technological advantage when used with GRID over the competition? So as to make sure that more and more devices (mobile, set-top boxes, TVs, etc.) have a Tegra chip in 'em.
Keep in mind, that is just one of many points of differentiation Nvidia has over its competition when it comes to media, and a great example of their product synergy (sorry for the buzzword). That is why I bought Nvidia over its competition. :-)
That is not the definition of opportunity cost. Opportunity cost is an intra-business concept which is self-pertaining; it has nothing to do with comparative cost. What you seem to be referring to is comparative advantage, which is inter-business. At any single instant in time a business's capacity is FIXED, which means total output is fixed: you can spend that capacity on all of A, all of B, or some combination of the two, but NOT all of both. Now, unless I am completely misreading Nvidia's words here, which I doubt, Nvidia is simply saying they could do A (consoles) or B (something else: GRID, Tegra, etc.) but not all of both, and it seems to me, based on their own words, that they are betting B will give them a higher ROI.
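A toy version of that fixed-capacity trade-off, with every figure invented purely for illustration:

```python
# Opportunity cost under fixed capacity: the return of the best
# alternative you give up. All figures are invented for illustration.
capacity_units = 100  # fixed engineering/fab capacity, arbitrary units

roi_per_unit = {
    "A: console silicon":  1.2,  # hypothetical return per unit of capacity
    "B: GRID/Tegra focus": 1.8,
}

for option, roi in roi_per_unit.items():
    forgone = max(r for o, r in roi_per_unit.items() if o != option)
    print(f"{option}: return {roi * capacity_units:.0f}, "
          f"opportunity cost {forgone * capacity_units:.0f}")
# With these numbers, choosing B forgoes less than choosing A, which is
# the bet the post above attributes to Nvidia.
```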
J7, I totally get that. I personally think that the ARM offering will be competitive, and in many cases advantageous, in both the desktop and server markets. ARM and co. are just starting to explore that space, so it's coming. I will tell you from experience that Intel and MS ABSOLUTELY DO have reason for concern here.
However, to clarify my original point: the T4 and the A15 WON'T run 64-bit code without a layer of abstraction (software) in between. Simply said, 32-bit hardware is not capable (well, in most conventional incarnations) of running 64-bit code.
That being said, I would be willing to bet that, as you mentioned, we will see an ARM chip (an A50-series, most likely) in Macs, but it won't happen, I think, until there is a 64-bit ARM chip out there (which the T4 is not), unless, as you seemed to suggest, they downgrade the Air to iOS. I highly doubt that, if for nothing else, because of SW compatibility. Essentially you'd create Apple's version of RT by doing it.
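If anyone wants to see what a given machine actually reports, here's a trivial sketch; it shows what the OS/interpreter sees, which is enough for the 32-bit vs 64-bit point above:

```python
# Report the machine type and the native pointer width, which tells you
# whether the OS/interpreter is running 32-bit or 64-bit.
import ctypes
import platform

print("machine:", platform.machine())  # e.g. 'armv7l' on a 32-bit ARM board
print("pointer width:", ctypes.sizeof(ctypes.c_void_p) * 8, "bits")
# A 32-bit result means native 64-bit binaries simply won't load here,
# which is the point about the A15/T4 above.
```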
Ahhhh...[T4] The A15/ARMv7 ISA is a 32-bit architecture. Maybe I'm confused by the context, but I am guessing WinRT 64-bit would be moot without involving an OS subsystem and some thunking mechanism to down-address, etc....basically a hypervisor.
"Intel have never used this benchmark. It is also an uncontrolled user benchmark run over the internet so there is no real standardization between different tests and you can get different results purely depending what's running on your machine"
Haha - funny to read you arguing anything about disparate testing and inconsistency...this kind of discourse only seems to be part of your tone when it suits your needs. Apparently, stringing several PandaBoards together doesn't merit disparity in your mind....hahahahah.
I have to believe that it's only a matter of time (as in the duration of MWC) before some wins are announced. I just don't see Qualcomm's appeal at this point; maybe I see things through green glasses, but their offerings aren't all that special. It will be interesting, though, to see how competing production products match up. If you are to believe any of the T4 benchmarks (and I'm not one for contrived test cases like this), the Snapdragon 600 isn't even a match for the T4i, even at twice the die space, power, and cost. Maybe it's me....as soon as I sell, the stock takes off. :)
It does appear that Qualcomm is on their heels, but you wouldn't know it by comparing the price action on the stock today...up 2% while nVidia is struggling to stay green. The stock's P/E has yet again been adjusted lower despite the product lineup...investors are really missing the boat here, or they are just not convinced there is upside. No doubt many are sidelined to see if the design-wins rumor indeed holds true. [sigh]