Q1 2024 GSI Technology Inc Earnings Call

Participants

Didier Lasserre; VP of Sales; GSI Technology, Inc.

Douglas M. Schirle; CFO; GSI Technology, Inc.

Lee-Lean Shu; Co-Founder, President, CEO & Chairman; GSI Technology, Inc.

George Gascar; Private Investor

Jeffrey M. K. Bernstein; VP; Cowen Inc.

Luke Bohn; Private Investor

Nicolas Emilio Doyle; Associate; Needham & Company, LLC, Research Division

Presentation

Operator

Ladies and gentlemen, thank you for standing by. Welcome to GSI Technology's First Quarter Fiscal 2024 Results Conference Call. (Operator Instructions) Before we begin today's call, the company has requested that I read the following safe harbor statement. The matters discussed in this conference call may include forward-looking statements regarding future events and the future performance of GSI Technology that involve risks and uncertainties that could cause actual results to differ materially from those anticipated. These risks and uncertainties are described in the company's Form 10-K filed with the Securities and Exchange Commission.

Additionally, I have also been asked to notify you that this conference call is being recorded today, July 27, 2023, at the request of GSI Technology. Hosting the call today is Lee-Lean Shu, the company's Chairman, President, and Chief Executive Officer. With him are Douglas Schirle, Chief Financial Officer, and Didier Lasserre, Vice President of Sales. I would like now to turn the conference over to Mr. Shu. Please go ahead, sir.

Lee-Lean Shu

Good day, everyone, and welcome to our First Quarter Fiscal Year 2024 earnings call. We are happy to update you on the milestones we have achieved on our journey toward innovation and growth. Our dedication and focus have allowed us to make good progress during our first quarter of fiscal 2024.

Let's start with our progress on advancing our growth and innovation objectives. In line with our commitment to land Gemini-I customers, we have moved forward to demos with 2 of our SAR targets. Additionally, we added new resources to address the fast search market and hone our product for this application. Didier will provide more color on this in his comments.

Additionally, I'm pleased to share that version 2 of our L-Python compiler stack is on track for release to beta customers by the end of this summer. This marks a significant step forward in our product roadmap, enabling us to deliver cutting-edge solutions and drive customer satisfaction. L-Python is designed to make it easy for other developers to contribute to and improve the software. The appeal of L-Python is that it can be used on different operating systems like Windows, Linux, and macOS. The reason L-Python is so fast is performance optimization at both the high level and the low level; in other words, we make the code more efficient before running it. Additionally, L-Python allows for easy customization of the different ways it can convert code, which can be useful for specific needs or preferences. Not only is L-Python fast and flexible, but the stack is also usable for other applications that we believe could readily create an ecosystem beyond the APU.
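
As a hedged illustration of the "optimize before running" point (our sketch, not from the call): ahead-of-time compilers in this family take type-annotated Python source and compile it to native code. The `i32` type and `lpython` module below follow the open-source LPython project's published conventions; treat the exact names as assumptions rather than GSI-confirmed API.

```python
# Hedged sketch of the kind of type-annotated Python an ahead-of-time
# compiler such as L-Python optimizes before it runs. The `i32` type
# and the `lpython` import follow the open-source LPython project's
# published conventions; exact names are assumptions, not GSI API.
from lpython import i32

def sum_squares(n: i32) -> i32:
    s: i32 = 0
    for i in range(n):
        s += i * i
    return s

print(sum_squares(1000))
# Compiled ahead of time (e.g. `lpython sum_squares.py`), the loop runs
# as native machine code on Windows, Linux, or macOS rather than being
# interpreted -- the "make the code efficient before running" point.
```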

We are closing in on successfully completing the tape out of Gemini-II. This is expected to be finalized and sent off to TSMC in the next few weeks. This tape out is a major achievement and showcases our commitment to pushing the boundaries of AI technology. The machine learning market is extremely competitive, and the successful completion of this milestone serves as a testament to our talented team's hard work and expertise. We anticipate sampling the solution during the second half of calendar year 2024.

We remain focused on driving innovation, delivering exceptional products, and leveraging our strengths to foster strategic partnerships that will help propel our company forward. The strategic additions to our team reinforce our commitment to driving growth, fostering partnerships, and delivering innovative solutions to our customers. We are excited about the opportunities and the value these individuals will bring to our organization as they work with our dedicated team to position us for success. I want to thank our employees, customers, and shareholders for their unwavering support and commitment. Together, we will continue to build a bright future for our company. Now I will hand the call over to Didier, who will discuss our recent development and sales activities. Please go ahead, Didier.

Didier Lasserre

Thank you, Lee-Lean. I want to start by addressing a point mentioned earlier by Lee-Lean. We have strengthened our team with the addition of 2 highly skilled professionals who will play pivotal roles in developing strategic partnerships with hyperscalers and establishing our presence in the fast vector search market. These individuals bring a wealth of knowledge and extensive experience in their respective fields.

One of our new team members, who will assume the senior data scientist role, will lead our team on various projects and offload some of the workload from our division in Israel. With this team, we will transfer some functions to the U.S., including software application development and government-related projects that require collaboration with U.S.-based employees. Our U.S. data science team will play a crucial role in assisting customers with the compiler and conducting benchmarks across different platforms. Our new data scientist will collaborate with this team to optimize our plug-in for fast vector search, paving the way for the successful deployment of this business line for our company.

Our second new resource brings a wealth of experience from the semiconductor sector, having worked for leading FPGA companies. This background has afforded him extensive industry connections, which will be invaluable as we strive to engage and form partnerships with the top hyperscalers. He will lead the building of our platform to explore strategic partners for our APU technology and to develop service and licensing revenue sources to fund future APU development.

On the last call, we mentioned we were working with a major hyperscaler on inference of large language models based on the Gemini architecture. This relationship holds great potential for our growth, and we recently added additional resources to this team. We have conducted a feasibility study exploring the Gemini architecture, and I am delighted to say that we are making great progress with this prospect. The study specifically focuses on GPT inference utilizing a future APU. We found that the APU, when compared to existing technologies, can achieve significantly enhanced performance levels while utilizing the same process technology.

GPT is a memory-intensive application. It requires a very large and very fast memory hierarchy, from external storage memory all the way to the internal processor's working memory. In a GPT model with 175 billion parameters, 175 gigabytes of fast memory is required to store the model's parameters. This can be accomplished by incorporating a processor die and several HBMs, which are high-bandwidth memories, on a 2.5D substrate. It also requires a large and very fast internal memory next to the processor core as working memory to support the large matrix multiplications performed by the processor core.
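
A quick back-of-the-envelope check of that figure (our illustration, not from the call): 175 gigabytes for 175 billion parameters implies roughly one byte, that is an 8-bit format, per parameter.

```python
# Back-of-the-envelope check of the 175 GB figure cited above.
# Assumption (ours, not stated on the call): weights stored at one
# byte (8 bits) per parameter, e.g. an INT8/FP8-style format.

def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Gigabytes needed to hold the model weights."""
    return num_params * bits_per_param / 8 / 1e9

params = 175e9  # 175-billion-parameter GPT-class model
for bits in (16, 8, 4):
    print(f"{bits}-bit weights: {weight_memory_gb(params, bits):,.0f} GB")
# 8-bit weights: 175 GB -- matching the figure on the call
```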

The APU architecture has inherently large built-in memory and large memory bandwidth that not only provide memory throughput but also support very high-performance computation. Gemini can achieve similar peak TOPS per watt to state-of-the-art GPUs on the same process technology node. However, with our massive L1 size and large bandwidth, the APU can sustain average TOPS nearly the same as peak TOPS, unlike a GPU. In a single module composed of a 5-nanometer Gemini die plus 6 HBM3 dies, we have calculated that we could achieve more than 0.6 token per second per watt with an input size of 32 tokens generating a context of 64 tokens on a 175-billion-parameter GPT model.

This output is more than 60x the performance that could be delivered by a state-of-the-art GPU on a slightly better technology node. This study was done in conjunction with laying out the development road map for Gemini-III to move further into generative AI territory.
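
To put those two figures together (an illustrative sketch; the module power budget below is a hypothetical placeholder that the call does not specify):

```python
# Illustrative sketch of the efficiency figures cited above.
# ASSUMPTION: module_power_w is a hypothetical placeholder; the call
# gives only the efficiency (>0.6 token/s/W) and the 60x claim.

apu_tokens_per_sec_per_watt = 0.6  # figure cited on the call
module_power_w = 100.0             # hypothetical module power budget

apu_throughput = apu_tokens_per_sec_per_watt * module_power_w
gpu_throughput = apu_throughput / 60  # per the 60x claim on the call

print(f"APU module: ~{apu_throughput:.0f} tokens/s at {module_power_w:.0f} W")
print(f"GPU at same power (per 60x claim): ~{gpu_throughput:.1f} tokens/s")
```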

The APU holds a distinctive advantage in delivering low power consumption at peak performance levels, given its in-memory processing capability. As we have seen, generative AI applications like ChatGPT are becoming more capable with each generation. The driving force behind this improved capability is the number of parameters used by the large language models that power them. More parameters require more computation, leading to higher energy usage and a much larger carbon footprint. To help combat this growth in carbon footprint, researchers are exploring new ways to compress data to reduce memory requirements. There are trade-offs between the formats that researchers are investigating. To navigate these trade-offs, they need a flexible solution. Unfortunately, GPUs and CPUs lack this flexibility and are limited to a small fixed set of data formats.

GSI Technology's APU technology provides the flexibility to explore new methods. Because computation is performed at the bit level, it can operate on data elements of any size, with a resolution as fine as a single bit. This allows innovative solutions to be developed that reduce energy by optimizing the number of usable bits for each data transfer. As we work with potential strategic licensing partners, we can increase awareness of our capability to solve some of AI's biggest challenges.
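
As a rough illustration of that energy lever (our sketch; the per-bit energy cost below is a generic placeholder, not a GSI number):

```python
# Rough illustration of why bit-level format flexibility saves energy.
# ASSUMPTION: data-movement energy scales roughly linearly with bits
# moved; the pJ/bit cost below is a generic placeholder, not GSI data.

PJ_PER_BIT = 5.0  # hypothetical energy cost to move one bit

def transfer_energy_j(num_values: float, bits_per_value: int) -> float:
    return num_values * bits_per_value * PJ_PER_BIT * 1e-12

n = 1e9  # one billion values moved
for bits in (32, 8, 5, 3):  # bit-level hardware isn't limited to 8/16/32
    print(f"{bits:>2}-bit elements: {transfer_energy_j(n, bits):.3f} J")
# Fixed-format CPUs/GPUs would round 5- or 3-bit data up to 8 bits,
# paying to move bits that carry no information.
```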

Regarding our work on the Gemini-I solution, we have made notable progress with 2 of our SAR targets, underscoring our commitment to expanding our presence in this market. We have set a goal of closing a sale in FY2024 with one of these customers. As I mentioned, we recently added resources to support our beta fast vector search customers. With the additional resources in place, we anticipate building a SaaS revenue source with customized solutions for fast vector search customers before the end of this fiscal year.

Let me switch now to the customer and product breakdown for the first quarter. In the first quarter of fiscal 2024, sales to Nokia were $1.9 million or 33% of net revenues compared to $1.3 million or 14% of net revenues in the same period a year ago and $1.2 million or 21.8% of net revenues in the prior quarter. Military defense sales were 33.8% of first quarter shipments compared to 22.3% of shipments in the comparable period a year ago and 44.2% of shipments in the prior quarter. SigmaQuad sales were 58.6% of first quarter shipments compared to 44.8% in the first quarter of fiscal 2023 and 46.3% in the prior quarter. I'd now like to hand the call over to Doug. Please go ahead, Doug.

Douglas M. Schirle

Thank you, Didier. GSI reported a net loss of $5.1 million or $0.21 per diluted share on net revenues of $5.6 million for the first quarter of fiscal 2024 compared to a net loss of $4 million or $0.16 per diluted share on net revenues of $8.9 million for the first quarter of fiscal 2023 and a net loss of $4 million or $0.16 per diluted share on net revenues of $5.4 million for the fourth quarter of fiscal 2023.

Gross margin was 54.9% in the first quarter of fiscal 2024 compared to 60.2% in the prior year period and 55.9% in the preceding fourth quarter. The year-over-year decrease in gross margin for the first quarter of fiscal 2024 was primarily due to the impact of fixed manufacturing costs in our cost of goods on lower net revenue.

Total operating expenses in the first quarter of fiscal 2024 were $8.2 million compared to $9.3 million in the first quarter of fiscal 2023 and $6.9 million in the prior quarter. Research and development expenses were $5.2 million compared to $6.6 million in the prior year period and $5 million in the prior quarter. Selling, general and administrative expenses were $3 million in the quarter ended June 30, 2023, compared to $2.7 million in the prior year quarter and $1.9 million in the previous quarter. We estimate that through June 30, 2023, we have incurred research and development spending in excess of $140 million on our APU product offering.

First quarter fiscal 2024 operating loss was $5.1 million compared to an operating loss of $3.9 million in the prior year period and an operating loss of $3.9 million in the prior quarter. First quarter fiscal 2024 net loss included interest and other income of $80,000 and a tax provision of $51,000, compared to interest and other expense of $26,000 and a tax provision of $60,000 for the same period a year ago. In the preceding fourth quarter, net loss included interest and other income of $101,000 and a tax provision of $191,000.

Total first quarter pretax stock-based compensation expense was $820,000 compared to $638,000 in the comparable quarter a year ago and $515,000 in the prior quarter. At June 30, 2023, the company had $27.7 million in cash, cash equivalents and short-term investments compared to $30.6 million in cash, cash equivalents and short-term investments at March 31, 2023. Working capital was $32.1 million as of June 30, 2023, compared to $34.7 million at March 31, 2023, with no debt.

Stockholders' equity as of June 30, 2023, was $48.6 million compared to $51.4 million as of the fiscal year ended March 31, 2023. During the June quarter, the company filed a registration statement on Form S-3 so that the company would be in a position to quickly access the markets and raise capital if the opportunity arises. Operator, at this point, we'll open the call to Q&A.

Question and Answer Session

Operator

Thank you. (Operator Instructions) Our first question comes from Nic Doyle, Needham & Company. Please go ahead.

Nicolas Emilio Doyle

Nic Doyle from Needham. Just first, could you expand on the drivers behind the gross margin this quarter and next quarter? We see a little bit of a decline this quarter and you expect it to increase next quarter. Could you just expand on why that's happening?

Douglas M. Schirle

Yes. It's really related to product mix. We do our best effort at forecasting what we believe the revenues are going to be during the quarter. But obviously, with only about 1/3 or so of the quarter booked at the beginning of the quarter, we have to estimate where the revenues are going to come from. And it's strictly tied to product mix, nothing more.

Nicolas Emilio Doyle

Okay. Could you just tell us what part of the mix was higher this quarter that's driving a lower margin?

Douglas M. Schirle

Yes. The biggest thing that impacts the margin is that we have quite a bit of military business, and that has the highest margin. Nokia revenues are generally at a reasonable level, and that also is good margin. It really depends on military sales; that's probably the biggest factor at this point.

Nicolas Emilio Doyle

Okay. Great. Makes sense. You talked about how you tested your APU, which can basically sustain higher TOPS and drive better performance per watt with the specific GPT application. Can you just expand on how that's done, how your APU differentiates from CPUs and GPUs on the market? Is it entirely to do with the ability to do computations at the bit level? That was my understanding. Yes, any detail there would be great.

Lee-Lean Shu

Yes. First of all, a GPU has a very, very small cache, and I think that's good for graphics processing. But when you talk about the huge parameter counts in large language models, they can only deliver a fraction of their peak from a TOPS point of view. In the APU, we have a huge memory inside the chip, and we calculate the TOPS specifically from how well we can support the processing with our memory. That's how we come up with the TOPS. That's why our average TOPS is nearly the same as our peak TOPS. I hope I answered your question.
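
A rough way to see Lee-Lean's point (an illustrative roofline-style sketch with made-up numbers, not GSI or GPU-vendor data): sustained throughput is capped by memory bandwidth when a workload fetches a lot of data per operation, which is exactly the large language model case.

```python
# Illustrative roofline-style sketch of sustained vs. peak throughput.
# All numbers are hypothetical placeholders, for intuition only.

def sustained_tops(peak_tops: float, mem_bw_tb_s: float,
                   ops_per_byte: float) -> float:
    # Sustained throughput is the lesser of the compute peak and what
    # the memory system can feed: bandwidth (TB/s) times arithmetic
    # intensity (ops/byte) gives memory-limited tera-ops per second.
    return min(peak_tops, mem_bw_tb_s * ops_per_byte)

intensity = 2.0  # ops per byte fetched -- low, as in LLM inference

small_cache = sustained_tops(1000.0, 3.0, intensity)     # off-chip bound
large_onchip = sustained_tops(1000.0, 500.0, intensity)  # on-chip bound

print(f"bandwidth-starved design sustains {small_cache:.0f}/1000 TOPS")
print(f"large on-chip memory design sustains {large_onchip:.0f}/1000 TOPS")
```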

Nicolas Emilio Doyle

Okay. If I could just sneak one more in, I think in the past you've talked about the cost of Gemini-II being about $2.5 million. Is that still the case? And is that entire tape out cost behind us, or is it still ongoing?

Lee-Lean Shu

Just the tape-out cost.

Douglas M. Schirle

Yes. The $2.5 million is the tape out cost. The expense could hit later this quarter or the early part of the October quarter. But yes, that's just the tape out cost. We've incurred, as we said in our comments, probably in excess of $140 million developing this product line, and that's both Gemini-I and Gemini-II.

Lee-Lean Shu

Just one comment. We published a white paper on our website with a further discussion of why APUs are good for large language models. If you are interested, look at www.gsitechnology.com.

Operator

(Operator Instructions) Our next question comes from Luke Bohn, Private Investor.

Luke Bohn

Thanks. In terms of that study, did you mention that it was projecting a 5-nanometer architecture for the comparison with GPUs on key performance?

Lee-Lean Shu

Correct.

Luke Bohn

I'm supposing, based on your understanding of the engineering and the physics of your APU architecture, that you projected that it is feasible. Is that the case? And can you project even further -- is there a lower limit in terms of shrinking to an even denser architecture?

Lee-Lean Shu

Yes. We picked 5-nanometer because, at this moment, the state-of-the-art processor is either 5- or 4-nanometer. We want an apples-to-apples comparison, so we picked 5-nanometer as the target base. Of course, if we want to implement a real chip, I think we would want to do it in an even more advanced technology, to stay in step with everybody else.

Luke Bohn

Okay. So the intended plan is to make the leap basically from your current -- I think you said 16-nanometer for Gemini-II -- all the way to 5 for Gemini-III?

Lee-Lean Shu

No, no -- I'm sorry, Gemini-III is to be determined. We picked 5 nanometers because everybody else is on 5-nanometer, so it's a fair comparison.

Didier Lasserre

Right. And so that 5-nanometer was picked just as a comparison point for the study because, as Lee-Lean just said, that's what the GPUs are on: 5 nanometers. We wanted to do a straight comparison on technology. That does not mean Gemini-III would be on that technology. It could be something more aggressive.

Luke Bohn

Ahh, okay, so not a limit point?

Didier Lasserre

Correct.

Luke Bohn

Excellent. And in terms of you all having the larger memory cache and all the other advantages of flexibility in the memory that I read about in the white paper, how does that apply when comparing the APU to GPUs in machine vision? Both for real-world vision -- talking about EVs, autonomous vehicles, kind of referencing the Tesla earnings call saying they're buying as many NVIDIA GPUs as they can get their hands on, and your earlier references to being able to provide the APU to that market -- as well as more of the abstract machine vision: drug discovery and genetic medicine, things like that. Are you still seeing similar advantages?

Didier Lasserre

The advantage -- yes, the answer is that Gemini-I, we understood, was not a fit for what you talked about, ADAS. Gemini-II we anticipate to be a better fit, just because of the lack of an FPGA on the board with Gemini-II. But the fundamental unique architecture is going to be the same, which is the fact that we are doing the computation or the search on the memory bit lines in place, so we're not going off chip to fetch the data and then going back and rewriting the data. That fundamental unique architecture we have is there regardless of the market, with both Gemini-I and Gemini-II.
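
A toy model of that in-place point (our illustration, not a description of GSI hardware internals): the win comes from what never has to cross the chip boundary.

```python
# Toy model of in-place associative search vs. fetch-and-compare.
# Illustrative only -- not a description of GSI hardware internals.
import numpy as np

rows, bits = 1_000_000, 64
database = np.random.randint(0, 2, size=(rows, bits), dtype=np.uint8)
query = np.random.randint(0, 2, size=bits, dtype=np.uint8)

# Conventional flow: every database byte crosses the memory bus to the
# processor to be compared, and results are written back afterward.
bytes_moved_conventional = database.nbytes

# In-place flow: the compare happens at the bit lines; only the query
# goes in and a one-bit match flag per row comes out.
bytes_moved_in_place = query.nbytes + rows // 8

print(f"fetch-and-compare moves ~{bytes_moved_conventional:,} bytes")
print(f"in-place search moves  ~{bytes_moved_in_place:,} bytes")
```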

Luke Bohn

Awesome. Yes, I just wanted to get that clarification, since Lee-Lean talked about GPU performance being apt for visual processing -- I wanted that clarification about the broader machine vision and visual processing markets. That's great. I think I have one more question. I definitely applaud you all for moving forward with the fast vector search, because there have been so many announcements recently about the value of large vector search, NLP, and neural networks broadly, and seeing how much of that TAM you all can address -- definitely good there. Good to hear that you're putting some more traction into that pathway. And I have one kind of funny curiosity. I've noticed the name Gemini associated with accelerated computing, most recently and most prominently with Google. And it's always made sense to me in terms of parallel processing -- you have the Gemini historical reference. But since SpinQ and Google have now also adopted Gemini, I'm wondering if that is at all an infringement on your intellectual property or your trademark, or if you find it to just be kind of a humorous affirmation, since you're the first Gemini?

Didier Lasserre

No, we definitely looked into it. And the issue we have is that our trademark is for hardware devices, semiconductor devices, and Google's is software related, so there's no overlap.

Luke Bohn

Okay. That makes sense. Has anything shifted -- I'm not sure if you've actually crunched the numbers -- in terms of your TAM and SAM with these new focuses on the large language models? How do you see your concrete addressable market projections updated at this point in terms of timeline and size?

Didier Lasserre

Yes. We're still working on those TAMs. And there are different segments, right? You have the retrieval and you have the generative, and those are 2 different areas. We can certainly address retrieval now with Gemini-I and Gemini-II. We certainly feel the generative side is going to be more with Gemini-III. But yes, we're working on those TAM/SAMs now; they're just not available yet.

Luke Bohn

Yes, I know it's a hard thing to value, which is reflected all over the analyst side of things. I think that's all I've got. Thank you.

Operator

Our next question comes from Jeff Bernstein, TD Cowen.

Jeffrey M. K. Bernstein

A couple of questions for you. One, just on the last answer, you were talking about Gemini-I and Gemini-II addressing retrieval. You mean queries there? And when you say addressing generative, are you talking about training? Or just clarify that a little bit.

Didier Lasserre

The response. Yes, you're retrieving the data, and that's something we do very well now, but generative is really generating the response. And that requires very, very high memory bandwidth, which we have, and a very, very large memory cache in general. I mean, that's why we talked about pairing up with HBM3 for that. And that's more on the generative side.

Jeffrey M. K. Bernstein

Okay, so training, Gemini training?

Lee-Lean Shu

No, no. Inference.

Didier Lasserre

It's still inference, yes, it's not training.

Jeffrey M. K. Bernstein

Okay, still inference. Okay. And then, as long as you were talking about the potential for a 5-nanometer or more aggressive kind of Gemini-III line: what is the current tape out cost? I know that you're not a processor -- more like a memory cell, which might be less expensive. But what do you think a tape out cost at 5-nanometer would be now?

Lee-Lean Shu

Well, at 5-nanometer, the mask cost itself is $15 million -- 1-5. To do a design at 5-nanometer, we probably need $100 million for the design. What we are doing right now is really looking for a partner. We are not going to do it ourselves.

Jeffrey M. K. Bernstein

Okay. And then I just want to talk about the capital situation. You've now got a registration statement in place. Unfortunately, you missed the big run-up in the stock. Why wouldn't you preferentially sell and lease back the headquarters for funds, and then have some more tangible progress to show, before we start talking about raising equity?

Douglas M. Schirle

Well, we have looked into the sale of the building. And we haven't decided to do that yet, but that still is an option. Property values are significantly higher than when we purchased the building many years ago. And it is an opportunity that we have considered and we've discussed it with the Board, but no decision as of yet has been made to sell the building.

Jeffrey M. K. Bernstein

Got you. Okay. And then just on the Nokia business -- if I remember correctly, you guys are in, at this point, the pretty old Nokia 7750 and 7950 routers. I don't even see any reference anymore to the 50. What's going on there? How much lead time would you get if they were end-of-lifeing that? Would there be some kind of lucrative end-of-life revenue that you might get out of that, etc.? Just give us a little feeling for your understanding of where you are with the Nokia business.

Didier Lasserre

Sure. As you said, it's in the 7750 and 7950 platforms, and they have extremely long life cycles, as we've been seeing. We get a 12-month rolling forecast from Nokia, and so far -- and that's as far out as they go -- the 12 months still looks healthy. What they did a while back is what's called a mid-life kicker, to try to give a little bit more performance to those existing systems. What that meant for us is that it went from a 72-megabit density to a 144-megabit density part for that mid-life kicker. The ASPs are obviously higher on the larger density parts. What we saw is that, even though some of the volumes have come down over time, revenue has been fairly flat just because the increase in the ASPs offset the decrease in the quantity. At this point, it's still going; we still have the 12-month forecast that looks healthy, and that's as much visibility as we get.

Jeffrey M. K. Bernstein

Got you. And then obviously, there was some movement around the chip shortages and packaging shortages and that kind of thing. Are we now to a more normalized rate here going forward?

Didier Lasserre

The lead times have become more normalized. The pricing, or the costs, have not. The price increases that we were subjected to, which in turn forced us to raise prices to our customers, are still there. We've kept our ASPs up, and we'll keep them there until there's any kind of movement from TSMC or any of the substrate folks that raised their prices. But at this point, the real change is the lead times: they have come down to a more normalized area.

Jeffrey M. K. Bernstein

Got you. But just in terms of inventories, should we be at a more normal kind of inventory situation going forward here?

Douglas M. Schirle

Yes. That's what we fully believe. Our inventories have dropped in the last quarter or two, and we expect them to drop over the next couple of quarters or so.

Operator

(Operator Instructions) Our next question comes from George Gascar, Private Investor.

George Gascar

It's George Gascar. Again, I'd like to drill into the financing situation. Based on your current cash position and your current development progress profile, what is your forward view on the need for additional financing?

Douglas M. Schirle

Well, at this point, given the materials we've discussed with the Board, this fiscal year we will certainly burn some cash, maybe $12 million to $13 million if the revenue numbers hold up. And if the revenue numbers hold up next year, we could start turning the corner and actually having more cash at the end of fiscal 2025 than at the end of fiscal '24.

George Gascar

I see. So what you're saying is that, based on the way you're moving along, your present cash position is sufficient for your targets and the development that you see over the next year?

Douglas M. Schirle

Currently, that's true. That's the situation.

Operator

Thank you. There are no further questions at this time. I would now like to turn the floor back over to Mr. Shu for closing comments. Please go ahead, sir.

Lee-Lean Shu

Thank you all for joining us. We look forward to speaking with you again when we report our second quarter fiscal 2024 results. Thank you.

Operator

This concludes today's teleconference. You may disconnect your lines at this time. Thank you for your participation.