Did MetaRAM Play a Role in Google’s Infrastructure Update to Caffeine?
In August, Google announced an upgrade to its infrastructure named Caffeine, aimed at making the search engine faster. One of the developers behind the upgrade described it as an upgrade to the Google File System.
At the US Patent Office assignment database this morning, I noticed patents and patent applications assigned to Google on November 18, 2009, originally granted to startup MetaRAM.
If a search engine wanted to upgrade its capabilities, it might also upgrade the hardware that it uses. MetaRAM’s patents could potentially transform Google’s computing capacity dramatically.
I have no idea at this point whether Google’s Caffeine upgrade also includes a memory upgrade from MetaRAM, but I suspect that the MetaRAM patent assignments could be related. Ownership of MetaRAM’s chip patents makes that more likely.
The Wall Street Journal’s blog told us about the demise of MetaRAM, a startup with some very high-profile founders, board members, and executives, including a CEO who was chief technology officer of Advanced Micro Devices Inc. for ten years, and a board member who was a former chief scientist of Sun Microsystems Inc. – Turning Out The Lights: Semiconductor Company MetaRAM
An interview with the original and former CEO of MetaRAM from May of 2008 provides a lot of insight into the direction that MetaRAM was taking – Pioneering Change in the Memory Market: MetaRam Visionary Fred Weber.
Did Google acquire MetaRAM or just the patent filings from the company? The WSJ blog post tells us that the company shut down without providing a date for its closing. However, LinkedIn profiles for people from MetaRAM still list their positions with the company as their present place of employment.
I haven’t been able to locate much in the way of recent news about MetaRAM, nor much that associates them with Google.
Will Google keep this new technology from MetaRAM in-house, and use it to reduce the costs of servers and workstations by a significant amount while increasing the amount of memory available to those systems? Or will they license or sell the technology directly, or both? So little is known at this point. There hasn’t been an announcement from Google or anyone from MetaRAM yet that I could find. I haven’t seen any rumors of the transaction behind the assignment of the patent filings anywhere on the Web either.
I’ve listed the granted patents and the patent applications from MetaRAM below. There are 49 of them in total; a number of them were filed more than once for one reason or another.
Granted Patents from MetaRAM:
Integrated memory core and memory interface circuit (7,515,453)
Abstract
A memory device comprises a first and second integrated circuit dies. The first integrated circuit die comprises a memory core as well as a first interface circuit. The first interface circuit permits full access to the memory cells (e.g., reading, writing, activating, pre-charging and refreshing operations to the memory cells). The second integrated circuit die comprises a second interface that interfaces the memory core, via the first interface circuit, an external bus, such as a synchronous interface to an external bus. A technique combines memory core integrated circuit dies with interface integrated circuit dies to configure a memory device. A speed test on the memory core integrated circuit dies is conducted, and the interface integrated circuit die is electrically coupled to the memory core integrated circuit die based on the speed of the memory core integrated circuit die.
Abstract
A memory circuit power management system and method are provided. In use, an interface circuit is in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for autonomously performing a power management operation in association with at least a portion of the memory circuits.
Abstract
A memory circuit power management system and method are provided. An interface circuit is in communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to perform a power management operation in association with only a portion of the memory circuits.
Interface circuit system and method for performing power saving operations during a command-related latency (7,581,127)
Abstract
A memory circuit power management system and method are provided. In use, an interface circuit is in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for performing a power management operation in association with at least a portion of the memory circuits. Such power management operation is performed during a latency associated with one or more commands directed to at least a portion of the memory circuits.
Methods and apparatus of stacking DRAMs (7,379,316)
Methods and apparatus of stacking DRAMs (7,599,205)
Abstract
Large capacity memory systems are constructed using stacked memory integrated circuits or chips. The stacked memory chips are constructed in such a way that eliminates problems such as signal integrity while still meeting current and future memory standards.
Power saving system and method for use with a plurality of memory circuits (7,580,312)
Abstract
A power-saving system and method are provided. In use, at least one of a plurality of memory circuits is identified that is not currently being accessed. In response to identifying the at least one memory circuit, a power-saving operation is initiated in association with the at least one memory circuit.
System and method for simulating an aspect of a memory circuit (7,609,567)
Abstract
A system and method are provided for simulating an aspect of a memory circuit. Included is an interface circuit that is in communication with a plurality of memory circuits and a system. Such interface circuit is operable to interface the memory circuits and the system for simulating at least one memory circuit with at least one aspect that is different from at least one aspect of at least one of the plurality of memory circuits. Per various embodiments, such aspect may include a signal, a capacity, a timing, and/or a logical interface.
System and method for power management in memory systems (7,590,796)
Abstract
A memory circuit power management system and method are provided. In use, an interface circuit is in communication with a plurality of physical memory circuits and a system. The interface circuit is operable to interface the physical memory circuits and the system for simulating at least one virtual memory circuit with a first power behavior that is different from a second power behavior of the physical memory circuits.
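A recurring idea in the abstracts above is an interface circuit that sits between the system and several physical memory circuits and presents a "simulated" or "virtual" memory circuit whose characteristics (capacity, timing, power behavior) differ from any single physical part. As a rough illustration only — this is my own sketch of the general concept, not code from the patents, and all class names and sizes here are invented:

```python
# Hypothetical sketch: an "interface circuit" that presents four small
# physical memory parts to the host as one larger virtual part.

class PhysicalMemory:
    """A toy physical memory circuit with a fixed capacity in words."""
    def __init__(self, size):
        self.size = size
        self.cells = {}

    def read(self, addr):
        return self.cells.get(addr, 0)

    def write(self, addr, value):
        self.cells[addr] = value


class InterfaceCircuit:
    """Presents N identical physical circuits as one virtual circuit
    whose capacity differs from any single physical part."""
    def __init__(self, parts):
        self.parts = parts
        self.part_size = parts[0].size

    @property
    def virtual_size(self):
        return sum(p.size for p in self.parts)

    def _route(self, addr):
        # Linear mapping: low addresses fill part 0 first, then part 1, ...
        return self.parts[addr // self.part_size], addr % self.part_size

    def read(self, addr):
        part, offset = self._route(addr)
        return part.read(offset)

    def write(self, addr, value):
        part, offset = self._route(addr)
        part.write(offset, value)


iface = InterfaceCircuit([PhysicalMemory(1024) for _ in range(4)])
iface.write(3000, 42)          # routed to the third physical part
print(iface.virtual_size)      # 4096 -- larger than any one part
print(iface.read(3000))        # 42
```

The host only ever sees the 4096-word virtual device; which physical part actually holds a given word is hidden behind the interface circuit, which is roughly what lets such a circuit also make power or timing decisions on the physical parts without the host's involvement.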
Pending Patent Applications from MetaRAM
Some of the patent applications below, originally assigned to MetaRAM, share names with granted patents above, and may contain the same or very similar content. There are also some pending patent applications with the same name and abstracts, and those have been grouped together below.
Apparatus and Method for Power Management of Memory Circuits by a System or Component Thereof (20080082763)
Abstract
An apparatus and method are provided for communicating with a plurality of physical memory circuits. In use, at least one virtual memory circuit is simulated where at least one aspect (e.g., power-related aspect, etc.) of such virtual memory circuit(s) is different from at least one aspect of at least one of the physical memory circuits. Further, in various embodiments, such simulation may be carried out by a system (or component thereof), an interface circuit, etc.
Combined Signal Delay and Power Saving System and Method for Use with a Plurality of Memory Circuits (20080123459)
Abstract
A system and method are provided. In use, at least one of a plurality of memory circuits is identified. In association with the at least one memory circuit, a power saving operation is performed and the communication of a signal thereto is delayed.
Emulation of Abstracted DIMMs using Abstracted DRAMs (20090216939)
Abstract
One embodiment of the present invention sets forth an abstracted memory subsystem comprising abstracted memories, which each may be configured to present memory-related characteristics onto a memory system interface.
The characteristics can be presented on the memory system interface via logic signals or protocol exchanges. The characteristics may include any one or more of an address space, a protocol, a memory type, a power management rule, a number of pipeline stages, a number of banks, a mapping to physical banks, a number of ranks, a timing characteristic, an address decoding option, a bus turnaround time parameter, an additional signal assertion, a sub-rank, a number of planes, or other memory-related characteristics. Some embodiments include an intelligent register device and/or an intelligent buffer device.
One advantage of the disclosed subsystem is that memory performance may be optimized regardless of the specific protocols used by the underlying memory hardware devices.
Abstract
A memory circuit power management system and method are provided. An interface circuit is in communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to perform a power management operation in association with only a portion of the memory circuits.
Abstract
A memory circuit power management system and method are provided. In use, an interface circuit is in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for autonomously performing a power management operation in association with at least a portion of the memory circuits.
Memory Circuit Simulation System and Method with Power Saving Capabilities (20080027697)
Abstract
A system and method are provided, including a component in communication with a plurality of memory circuits and a system. The component is operable to interface the memory circuits and the system for simulating at least one memory circuit with at least one aspect that is different from at least one aspect of at least one of the plurality of memory circuits. The component is further operable to perform a power-saving operation.
Memory Circuit Simulation System and Method with Refresh Capabilities (20080027703)
Abstract
A system and method are provided, including an interface circuit in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the plurality of memory circuits and the system for simulating at least one memory circuit with at least one aspect that is different from at least one aspect of at least one of the plurality of memory circuits. The interface circuit is further operable to control the refreshing of the plurality of memory circuits.
Memory Circuit System and Method (20090024789)
Memory Circuit System and Method (20090024790)
Abstract
A memory circuit system and method are provided in the context of various embodiments. In one embodiment, an interface circuit remains in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for performing various functionality (e.g., power management, simulation/emulation, etc.).
Memory Device with Emulated Characteristics (20080056014)
Memory Device with Emulated Characteristics (20080126687)
Memory Device with Emulated Characteristics (20080103753)
Memory Device with Emulated Characteristics (20080126692)
Memory Device with Emulated Characteristics (20080126689)
Memory Device with Emulated Characteristics (20080109206)
Memory Device with Emulated Characteristics (20080126688)
Memory Device with Emulated Characteristics (20080104314)
Abstract
A memory subsystem is provided, including an interface circuit adapted for communication with a system and a majority of address or control signals of the first number of memory circuits. The interface circuit includes emulation logic for emulating at least one memory circuit of a second number.
Memory module with memory stack and interface with enhanced capabilities (20070195613)
Memory module with memory stack (20080126690)
Abstract
A memory module, which includes at least one memory stack, comprises a plurality of DRAM integrated circuits and an interface circuit. The interface circuit interfaces the memory stack to a host system to operate the memory stack as a single DRAM integrated circuit.
In other embodiments, a memory module includes at least one memory stack and a buffer integrated circuit. The buffer integrated circuit, coupled to a host system, interfaces the memory stack to the host system to operate the memory stack as at least two DRAM integrated circuits. In yet other embodiments, an interface circuit maps virtual addresses from the host system to physical addresses of the DRAM integrated circuits in a linear manner.
In a further embodiment, the interface circuit maps one or more banks of virtual addresses from the host system to a single one of the DRAM integrated circuits. In yet other embodiments, the buffer circuit interfaces the memory stack to the host system for transforming one or more physical parameters between the DRAM integrated circuits and the host system.
In still other embodiments, the buffer circuit interfaces the memory stack to the host system for configuring one or more of the DRAM integrated circuits in the memory stack. Neither the patentee nor the USPTO intends for details outlined in the abstract to constitute limitations to claims not explicitly reciting those details.
Memory Refresh System and Method (20080025122)
Abstract
A system and method are provided. In response to the receipt of a refresh control signal, a plurality of refresh control signals is sent to the memory circuits at different times.
Memory Systems and Memory Modules (20080010435)
Abstract
One embodiment of the present invention sets forth a memory module that includes at least one memory chip, and an intelligent chip coupled to the at least one memory chip and a memory controller, where the intelligent chip is configured to implement at least a part of a RAS feature. The disclosed architecture allows one or more RAS features to be implemented locally to the memory module using one or more intelligent register chips, one or more intelligent buffer chips, or some combination thereof. Such an approach not only increases the effectiveness of certain RAS features that were available in prior art systems, but also enables the implementation of certain RAS features that were not available in prior art systems.
Method and Apparatus for Refresh Management of Memory Modules (20080028136)
Method and apparatus for refresh management of memory modules (20080109598)
Method and Apparatus For Refresh Management of Memory Modules (20080028137)
Method and Apparatus For Refresh Management of Memory Modules (20080109597)
Abstract
One embodiment sets forth an interface circuit configured to manage refresh command sequences that includes a system interface adapted to receive a refresh command from a memory controller, clock frequency detection circuitry configured to determine the timing for issuing staggered refresh commands to two or more memory devices coupled to the interface circuit based on the refresh command received from the memory controller, and at least two refresh command sequence outputs configured to generate the staggered refresh commands for the two or more memory devices.
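The staggering described in this abstract — one refresh command in, several time-offset refresh commands out — can be loosely illustrated in a few lines. This is only a conceptual sketch of staggered refresh, not the patented circuit; the tRFC value and device count are made up for illustration, not taken from any datasheet:

```python
# Hypothetical sketch: fan one refresh command out to several stacked
# DRAM devices at staggered times, so their refresh current spikes
# don't all coincide.

T_RFC_NS = 350    # assumed time (ns) one device needs to complete a refresh
N_DEVICES = 4     # assumed number of devices behind the interface circuit

def staggered_refresh_times(t_received_ns):
    """Given the time a single refresh command arrives from the
    controller, return the time each device's refresh is issued,
    offset by tRFC so at most one device refreshes at a time."""
    return [t_received_ns + i * T_RFC_NS for i in range(N_DEVICES)]

print(staggered_refresh_times(0))   # [0, 350, 700, 1050]
```

The practical point is the one the abstract hints at: the controller sees one device and sends one refresh command, while the interface circuit quietly spreads the actual refreshes out in time across the stack.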
Methods and apparatus of stacking DRAMs (20070058471)
Abstract
Large capacity memory systems are constructed using stacked memory integrated circuits or chips. The stacked memory chips are constructed in such a way that eliminates problems such as signal integrity while still meeting current and future memory standards.
Method and circuit for configuring memory core integrated circuit dies with memory interface integrated circuit dies (20070014168)
Abstract
A memory device comprises a first and second integrated circuit dies. The first integrated circuit die comprises a memory core as well as a first interface circuit. The first interface circuit permits full access to the memory cells (e.g., reading, writing, activating, pre-charging and refreshing operations to the memory cells). The second integrated circuit die comprises a second interface that interfaces the memory core, via the first interface circuit, an external bus, such as a synchronous interface to an external bus. A technique combines memory core integrated circuit dies with interface integrated circuit dies to configure a memory device. A speed test on the memory core integrated circuit dies is conducted, and the interface integrated circuit die is electrically coupled to the memory core integrated circuit die based on the speed of the memory core integrated circuit die.
Multiple-Component Memory Interface System and Method (20080028135)
Abstract
A system and method are provided, wherein a first component and a second component are operable to interface a plurality of memory circuits and a system.
System and Method for Adjusting the Timing of Signals Associated with a Memory System (20080115006)
Abstract
A system and method are provided for adjusting the timing of signals associated with a memory system. A memory controller is provided. Additionally, at least one memory module is provided. Further, at least one interface circuit is provided, the interface circuit capable of adjusting timing of signals associated with one or more of the memory controller and the at least one memory module.
System and Method for Delaying a Signal Communicated from a System to at Least One of a Plurality of Memory Circuits (20080025108)
Abstract
A system and method are provided for delaying a signal communicated from a system to a plurality of memory circuits. Included is a component in communication with a plurality of memory circuits and a system. Such component is operable to receive a signal from the system and communicate the signal to at least one of the memory circuits after a delay. In other embodiments, the component can receive a signal from at least one of the memory circuits and communicate the signal to the system after a delay.
System and Method for Increasing Capacity, Performance, and Flexibility of Flash Storage (20080086588)
Abstract
In one embodiment, an interface circuit is configured to couple to one or more flash memory devices and is further configured to couple to a host system. The interface circuit is configured to present at least one virtual flash memory device to the host system. The interface circuit is configured to implement the virtual flash memory device using the one or more flash memory devices to which the interface circuit is coupled.
System and Method for Reducing Command Scheduling Constraints of Memory Circuits (20080109595)
System and Method for Reducing Command Scheduling Constraints of Memory Circuits (20070204075)
System and Method for Reducing Command Scheduling Constraints of Memory Circuits (20080120443)
Abstract
A memory circuit system and method are provided. An interface circuit is capable of communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for reducing command scheduling constraints of the memory circuits.
System and Method for Simulating a Different Number of Memory Circuits (20080027702)
Abstract
A system and method are provided for simulating a different number of memory circuits. Included is an interface circuit in communication with a first number of memory circuits and a system. Such interface circuit is operable to interface the memory circuits and the system for simulating at least one memory circuit of a second number. Further, the interface circuit interfaces a majority of address or control signals of the memory circuits.
System and Method for Simulating an Aspect of a Memory Circuit (20090285031)
Abstract
A system and method are provided for simulating an aspect of a memory circuit. Included is an interface circuit that is in communication with a plurality of memory circuits and a system. Such interface circuit is operable to interface the memory circuits and the system for simulating at least one memory circuit with at least one aspect that is different from at least one aspect of at least one of the plurality of memory circuits. Under various embodiments, such aspect may include a signal, a capacity, a timing, and/or a logical interface.
System and Method for Simulating an Aspect of a Memory Circuit (20080062773)
System and Method for Simulating an Aspect of a Memory Circuit (20080133825)
Abstract
A memory subsystem is provided including an interface circuit adapted for coupling with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for emulating at least one memory circuit with at least one aspect that is different from at least one aspect of at least one of the plurality of memory circuits. Such aspect includes a signal, a capacity, a timing, and/or a logical interface.
Abstract
A system and method are provided for use in the context of a plurality of memory circuits. First information is received in association with a first operation to be performed on at least one of the memory circuits. At least a portion of the first information is stored. Still yet, second information is received in association with a second operation to be performed on at least one of the plurality of memory circuits. To this end, the second operation may be performed utilizing the stored portion of the first information and the second information.
System and Method for Translating an Address Associated with a Command Communicated between a System and Memory Circuits (20070192563)
Abstract
A memory circuit system and method are provided. An interface circuit is capable of communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to translate an address associated with a command communicated between the system and the memory circuits.
Just a question – frankly, I didn’t read all the patent abstracts, there are way too many. The question is, there have been reports saying that Caffeine is faster than the older Google, but could the truth be that it’s just their upgrades to the hardware? What do you think? Plus, could it be that the new search engine demands more resources from a hardware point of view?
Thanks
It does logically follow that Google would be looking at hardware methods to increase performance as well. Particularly ways of saving resources (equipment using less power etc) to shave a bit off their operating costs.
Hi James,
It does make sense, doesn’t it. The acquisition of MetaRAM’s patent filings is a pretty serious move, considering how many patent filings were involved, and the potential impact on Google’s computing power if they were to incorporate the memory boost that the technology can bring to them. I’m still wondering if the people associated with MetaRAM have moved to Google along with the technology.
It seems odd to me that Google would buy a patent on faster RAM just to use it in their servers – that doesn’t make sense to me. If they wanted to improve the performance of their hardware, you would think that there would be countless ways for them to do that which don’t require a patent on new memory modules. I don’t know what they paid for this, but it seems to me that if they needed more “brute force” in their computing environment that they could easily just buy more servers and get the job done a lot cheaper. I have a very hard time believing that RAM was a significant enough bottleneck in their systems that they needed to go out and acquire a patent on new RAM technology — that just can’t be a cost effective way of increasing the power of your hardware.
I think this is more likely to be unrelated to Caffeine. I mean how much sense does it make for them to acquire a memory patent just to use it in house and keep somebody else from getting it? It just doesn’t give them a big enough competitive advantage in any of their core markets to warrant acquiring for those purposes, IMO.
Could this be part of Google’s rumored router? If Google is in fact making its own uber-powerful line of routers, owning the patent on state-of-the-art memory modules makes a lot of sense. Very busy routers have huge memory demands.
Hi seoyourblog,
I was overwhelmed by the number of patent filings myself – it took a while to go through all of them, but I wanted to make sure that they were all assigned to Google, and on the same day, and that they all focused upon memory and hardware. I thought it was worth sharing links to all of the filings for anyone who might be interested in seeing more, but I’m not anticipating too many people going through all of those abstracts. 🙂
From what I’ve read, the upgrades related to Google’s Caffeine should reduce bottlenecks in the way that information is retrieved from Google’s databases by making changes to how information on different chunk servers is accessed. While there could possibly be some hardware changes involved, it sounds like the primary change is in how the Google File System works.
I don’t think that hardware changes are at the root of the Caffeine upgrade, but it does look like MetaRAM was actually producing chips that could be used with the kind of servers that Google may be using. Adding more hardware-based memory may make a significant impact on Google’s servers.
My understanding is that Netlist, Inc. has brought patent infringement claims, first against MetaRAM and then against Google, over the ‘386 patent related to a breakthrough memory module exhibited recently at SuperComputing 09. Netlist, in 2006 or thereabouts, approached Google to discuss, under an MOU, providing Google with breakthrough memory modules. Google ultimately declined. Netlist brought a patent infringement suit against MetaRAM, and was in settlement discussions with Google when Google brought a pre-emptive lawsuit for declaratory relief that it is not infringing on the Netlist ‘386 patent; and even if it were infringing that patent, Google claims the point is moot because the Netlist patent is invalid. See the Complaint, and the Answer and Counter-Complaint from Netlist; the action is in the central district court of CA.
Netlist symbol is NLST. For transparency, I own less than 20k shares in NLST, but have no insider information or have any relation to either company. It is advisable to do your own research.
Hi Buzzlord,
It wasn’t just one patent – the USPTO assignment database shows 49 patent filings assigned from MetaRAM to Google. There’s a possibility that Google may already be using some of this memory technology, though I haven’t seen anything yet that says so explicitly. A Google router sounds interesting – more research for me to do. Thanks. 🙂
Thank you, Auditor.
I appreciate your providing some details about this controversy. I’ve started to do some research.
The patent from Netlist in question appears to be this one:
Memory module decoder
InformationWeek wrote about Google’s response to a number of letters from NetList in an article titled Google Launches Pre-Emptive Lawsuit Against Memory Maker. It appears that Google received proposals in 2006 from NetList for server memory, but decided to use another supplier. In May of 2008, NetList’s CEO sent a letter to Google which claimed that the memory Google chose infringed NetList’s patent.
Google filed a Complaint for Declaratory Judgment against Netlist, Inc., on August 29, 2008, asking a Federal District Court in the Northern District of California, San Jose Division, for a judgment stating that “Google does not infringe any valid and enforceable claim of the ‘386 patent” or alternatively “That the ‘386 patent is invalid.”
According to a docket that I could find for the case, Netlist filed an answer and a counterclaim on November 18, 2008:
Google Inc. v. Netlist, Inc.
U.S. District Court
California Northern District (Oakland)
CIVIL DOCKET FOR CASE #: 4:08-cv-04144-SBA
Assigned to: Hon. Saundra Brown Armstrong
Referred to: Magistrate Judge Joseph C. Spero
Cause: 35:145 Patent Infringement
Date Filed: 08/29/2008
Jury Demand: Both
Nature of Suit: 830 Patent
Jurisdiction: Federal Question
In the Netlist Form 10-Q filing of November 3, 2009 are these statements about litigation between Google and Netlist, and Netlist and MetaRAM:
I did find some more information about the lawsuits between Netlist and MetaRAM, though I don’t know how up to date the following dockets are:
Netlist Inc. v. MetaRAM Inc.
U.S. District Court
District of Delaware (Wilmington)
CIVIL DOCKET FOR CASE #: 1:09-cv-00165-GMS
Assigned to: Judge Gregory M. Sleet
Cause: 35:271 Patent Infringement
Date Filed: 03/12/2009
Jury Demand: Both
Nature of Suit: 830 Patent
Jurisdiction: Federal Question
Metaram, Inc. v. Netlist, Inc.
U.S. District Court
California Northern District (San Francisco)
CIVIL DOCKET FOR CASE #: 3:09-cv-01309-VRW
Assigned to: Hon. Vaughn R. Walker
Demand: $0
Cause: 35:271 Patent Infringement
Date Filed: 03/25/2009
Jury Demand: Plaintiff
Nature of Suit: 830 Patent
Jurisdiction: Federal Question
All very intriguing Bill. I tend to agree with Buzzlord though. I don’t feel all this is Caffeine related. Hadn’t heard about this Google router before. Also intriguing.
I don’t mean to be throwing up old rumors, but the Google router rumor ran rampant last January. I haven’t heard much of anything about it since then though… The rumor was that Google was going to make routers to compete with the Juniper line of products. If Google is acquiring technology like this though – it could mean it is for real.
The thing that gets me is that when Google was making the Android OS, technology journalists kept asking if they would make a phone. Google’s response was always “We do not make hardware.” If you don’t make hardware at all… what would you need with patents for RAM?
All of you are missing the point. MetaRAM allows you to “hang” a lot of GBs off each CPU. Server apps, which are what’s of interest to Google, need as many GBs as possible. When you run multiple OSes on top of VMware, e.g., which is also a typical server configuration / application, you can easily gobble up TBs of RAM. Google has been designing its own server HW for at least 7 years. Rumor has it that its HW group is less than stellar. It is only logical that Google bought the MetaRAM IP portfolio for a dime a dozen. Netlist should shut up and pack up – not pick a war with Google over this. They are trying to claim priority over what may be similar – if not identical – to JEDEC JC LR-DIMM work. Interestingly, Inphi has not been mentioned by anyone on this blog. That’s how clueless everyone is.
My guess is that former employees of MetaRAM have not changed their profiles because many have not landed new jobs yet.
I agree with Bullaman. I think it is all to increase performance to face the upcoming challenges from Bing and Yahoo.
Netlist (NLST) has patent infringement cases against MetaRAM as well as Inphi.
Netlist has serious IP in “rank multiplication”, “embedded passives” (for freeing up space on memory modules for memory), and even heat dissipation (so that there is more tolerance for using lower-quality memory chips, which are cheaper to use).
GOOG was using MetaRAM. Memory module makers were using MetaRAM, and Intel was supporting it. It was the darling of the industry. Except that it was infringing on Netlist’s IP.
Then MetaRAM went out of business.
http://venturebeat.com/2008/08/19/idf-intel-gets-behind-start-up-metarams-server-memory-solution/
IDF: Intel gets behind start-up MetaRAM’s server memory solution
August 19, 2008
Google probably does not want to jeopardize its server operations over a “small” (compared to Google’s size) legal issue with Netlist.
Google and Netlist are in negotiations to cobble together an agreement. That must have been hard while the MetaRAM IP was not under Google’s belt (now that MetaRAM is gone).
This is probably why Google has had to buy MetaRAM IP, so it can get a better agreement with Netlist.
Inphi makes components – it makes an “iMB” buffer chip that it wants to sell to memory module makers. Inphi has no IP (intellectual property) in this area. It is probably hoping the memory module makers will deal with infringement issues. However Netlist has filed a suit against Inphi.
The memory module makers are waiting for JEDEC to put its foot down. Until that happens they probably have to wait.
Meanwhile Netlist is already manufacturing the 16GB HyperCloud memory.
From Google’s point of view, there is now no competitor to Netlist. Netlist’s IP is strong, and MetaRAM is not even a company anymore. Inphi is making components for memory module makers but holds no IP.
You can understand this probably makes Google jittery regarding the supply of these new memory modules, since Google is probably a big consumer of heavily memory-loaded computers. The availability of Netlist’s 16GB HyperCloud memory module allows it to double capacity without having to install additional servers (for memory-bound tasks, such as virtualization/cloud computing).
As long as the legal issues are not resolved, Inphi and the memory module makers will be wary of who they partner with to build the memory needed by Google.
Meanwhile Netlist can provide that memory right now. Using its 16GB HyperCloud, Google can install 384GB of memory per server (doubling memory, which would otherwise require adding additional servers).
Netlist allows doubling of memory, reduction in power consumption (which can be a lot for a heavily memory-loaded machine) and speed improvements. By avoiding adding new servers, you cut power consumption (as well as UPS and generator power requirements for data centers).
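To put rough numbers on the capacity argument (a back-of-the-envelope sketch of my own; the 16GB module size and 384GB-per-server figure come from the comments above, and the slot count is simply derived from them, not taken from any server spec):

```python
# Back-of-the-envelope sketch of the capacity argument. The 16 GB module
# size and 384 GB-per-server figure come from the comment; the slot count
# is just 384 / 16, not a documented server configuration.

MODULE_GB = 16
SERVER_GB = 384
slots = SERVER_GB // MODULE_GB  # 24 DIMM slots implied by the figures above

# Reaching the same 384 GB with standard 8 GB modules in the same number
# of slots would require twice the servers (with the attendant power,
# UPS, and generator costs mentioned above):
servers_needed_with_8gb = SERVER_GB / (slots * 8)

assert slots == 24
assert servers_needed_with_8gb == 2.0
```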
What do you think Google will do ? It probably wants to resolve the memory use issue, so it can continue forward.
Hi Bullaman,
It does sound like Caffeine is independent, doesn’t it? Going to look for more on the router. 🙂
Hi Buzzlord,
I’m still puzzling out why Google decided to invest in 49 patent filings involving memory. Internal uses only? Maybe. Though the patent infringement lawsuits can make one wonder.
Hi InTheKnow,
Thanks for a very interesting comment.
It did seem that Google would be interested in using MetaRAM’s technology for their own hardware.
Interestingly, Inphi filed a patent infringement suit earlier today against Netlist. The press release they issued included some specifics:
I looked up the patents:
Programmable strength output buffer for RDIMM address register (7,307,863)
Abstract
Output buffer with switchable output impedance (7,479,799)
Abstract
Things seem to be heating up.
Hi humza,
It’s beginning to sound like a possibility that Google may have been using MetaRAM’s technology for a while. That would make some sense.
Hi netlist_follower,
Thank you for your insight on this topic – much appreciated. I didn’t check into the Netlist lawsuit against Inphi yet, but that sounds like a good next step.
Inphi does have a number of patents, and it seems that they are now claiming that the two I listed a couple of comments ago are being infringed upon by Netlist in their modules, including the 16 GB HyperCloud memory.
Interesting speculation as well on Google’s acquisition of MetaRAM’s intellectual property. I imagine that Google will be happy to resolve these issues. Now that they own all of those patent filings, I’m wondering what their next steps might be, other than pursuing their declaratory judgment action and defending the suit brought by Netlist.
I have a very hard time believing that RAM was a significant enough bottleneck in their systems that they needed to go out and acquire a patent on new RAM technology — that just can’t be a cost effective way of increasing the power of your hardware.
I am going to go with the general opinion here: it does seem unlikely, but we have confirmation that they are doing this, and there will be a reason behind it.
I am far from an expert on patents and don’t really know how they work or what level of protection they offer, but surely there has to be some visibility on these things so they can be challenged before becoming law? I know from watching Dragons’ Den that what offers protection in one country does not always work in another.
Google’s veil of mist and secrecy is half the fascination; part of me (the cynic) always thinks PR….
Hi Bill,
Update on Netlist v Google litigation. After a hotly contested hearing on 11/12/09, the Hon. Armstrong issued an order dated 11/16/09 in favor of Netlist’s ‘386 patent claim construction. On 11/18/09 or so, Google changed the attorney.
On 11/24/09, in the Netlist v MetaRAM joint case management statement, MetaRAM disclosed the additional comment that it “ceased operations, and prior to then sold only approximately $37,000 worth of DDR3 memory controllers subject to lawsuit. None of those memory controllers were used by MetaRAM’s customers in commercial sales, and instead all were destroyed.” In the following sentence, MetaRAM referenced Google v Netlist as a related case. Actions speak louder than words. A reasonable inference is that MetaRAM has taken drastic action to reduce and limit any potential liability from alleged patent infringement. Can you guess the identity of MetaRAM’s customer, and why $37,000 worth of non-commercial DDR3 memory controllers were destroyed?
In re Netlist v Inphi, my understanding is that Netlist’s IP portfolio is a continuation of its earlier patents. In fact, Netlist received more patent(s) in November 2009.
IMHO, Netlist is a logical acquisition target for Google, CISCO, Intel or even Microsoft in 2Q/3Q of 2010. Then again, you never know about tech stocks.
I wonder if the Nov 12, 2009 court order has anything to do with NLST stock price rise starting Nov 11-12.
Google probably owns the fastest CPUs and RAM a server could hold, but why upgrade? Maybe they’re trying to fix some server hardware errors, or they are up to something. Since they started engineering a Google phone, why not try their hand at Google routers? That would be cool!
Hypothetically speaking, if Google is venturing out to manufacture servers to compete against Cisco, it would make sense for Cisco to acquire Netlist and Broadcom. That is assuming that either is up for sale. My understanding is that Netlist insiders hold 51% of its stock. They have a manufacturing plant in China to capitalize on this HyperCloud R&D investment. I will vote for the underdog to knock out the giant. The bigger they are, the harder they fall. I’ve got more DD to conduct to set an entry level for Netlist and Broadcom.
Hi Bill,
Based on my reading of the Inphi and Netlist patent history and portfolios, I agree with Netlist legal counsel’s opinion that Inphi’s retaliatory infringement claims have no merit. My research shows the following:
Netlist’ IP attorneys of record: Knobbe, Martin, Olson & Bear
7,289,386, patent date: October 30, 2007
Appl. No.: 11/173,175
Filed: July 1, 2005
Inphi’s IP attorneys of record: Koppel, Patrick, Heybl & Dawson
7,307,863, patent date: Dec. 11, 2007
Appl. No.: 11/195,910
Filed: Aug. 2, 2005
Inphi’s 2nd patent referenced in lawsuit
7,479,799, patent date: Jan 20, 2009
Appl. No.: 11/376,593
Filed: Mar. 14, 2006
Netlist’s ‘386 patent can also claim the benefit of its prior related patents.
7,289,386, patent date: October 30, 2007
Appl. No.: 11/173,175
Filed: July 1, 2005
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application is a continuation-in-part of U.S. patent application Ser. No. 11/075,395, filed Mar. 7, 2005, which claims the benefit of U.S. Provisional Application No. 60/550,668, filed Mar. 5, 2004 and U.S. Provisional Application No. 60/575,595, filed May 28, 2004. The present application also claims the benefit of U.S. Provisional Application No. 60/588,244, filed Jul. 15, 2004, which is incorporated in its entirety by reference herein.
A slice of Netlist’s patent summary and description:
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
Various types of memory modules 10 are compatible with embodiments described herein. For example, memory modules 10 having memory capacities of 512-MB, 1-GB, 2-GB, 4-GB, 8-GB, as well as other capacities, are compatible with embodiments described herein. In addition, memory modules 10 having widths of 4 bytes, 8 bytes, 16 bytes, 32 bytes, or 32 bits, 64 bits, 128 bits, 256 bits, as well as other widths (in bytes or in bits), are compatible with embodiments described herein. Furthermore, memory modules 10 compatible with embodiments described herein include, but are not limited to, single in-line memory modules (SIMMs), dual in-line memory modules (DIMMs), small-outline DIMMs (SO-DIMMs), unbuffered DIMMs (UDIMMs), registered DIMMs (RDIMMs), fully-buffered DIMM (FBDIMM), mini-DIMMs, and micro-DIMMs.
. . .
Memory Density Multiplication
In certain embodiments, two memory devices having a memory density are used to simulate a single memory device having twice the memory density, and an additional address signal bit is used to access the additional memory. Similarly, in certain embodiments, two ranks of memory devices having a memory density are used to simulate a single rank of memory devices having twice the memory density, and an additional address signal bit is used to access the additional memory. As used herein, such simulations of memory devices or ranks of memory devices are termed as “memory density multiplication,” and the term “density transition bit” is used to refer to the additional address signal bit which is used to access the additional memory.
In certain embodiments utilizing memory density multiplication embodiments, the memory module 10 can have various types of memory devices 30 (e.g., DDR1, DDR2, DDR3, and beyond). The logic element 40 of certain such embodiments utilizes implied translation logic equations having variations depending on whether the density transition bit is a row, column, or internal bank address bit. In addition, the translation logic equations of certain embodiments vary depending on the type of memory module 10 (e.g., UDIMM, RDIMM, FBDIMM, etc.). Furthermore, in certain embodiments, the translation logic equations vary depending on whether the implementation multiplies memory devices per rank or multiplies the number of ranks per memory module.
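The “memory density multiplication” idea in the quoted passage can be illustrated with a tiny sketch. This is my own illustration, not code from the patent – the function and signal names are invented – but it shows the basic trick: a “density transition bit” pulled from the address decides which of two physical ranks stands behind the single larger rank the host believes it is addressing.

```python
# Illustrative sketch (not from the patent): two physical ranks emulate
# one logical rank of twice the density. The extra address bit (the
# "density transition bit") selects which physical rank's chip-select
# is asserted. All names here are hypothetical.

def decode_chip_select(logical_cs_active, density_transition_bit):
    """Map one logical chip-select plus the density transition bit to
    the two physical chip-selects behind it (True = asserted)."""
    if not logical_cs_active:
        return (False, False)   # neither physical rank selected
    if density_transition_bit == 0:
        return (True, False)    # lower half of address space -> rank 0
    return (False, True)        # upper half of address space -> rank 1

# The host sees one 2 GB rank; the module's logic drives two 1 GB ranks.
assert decode_chip_select(True, 0) == (True, False)
assert decode_chip_select(True, 1) == (False, True)
assert decode_chip_select(False, 0) == (False, False)
```

In the real designs described in the patent, the translation logic is more involved (it varies with whether the transition bit is a row, column, or bank address bit, and with the module type), but the chip-select steering above is the core of the emulation.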
My understanding is that Google, MetaRAM and Inphi have a weak argument for why they should not have to pay for “alleged” patent infringement. Equity and corporate ethics favor Netlist.
Please do your own DD and come to your own conclusion. Happy holidays . . .
Hi Inspirational,
I’m not sure that Google acquired MetaRAM’s intellectual property solely to increase the power of their technology, but I would say that memory modules like this would be helpful in getting rid of some bottlenecks.
Hi Jimmy,
The patent process is pretty involved, and subjects applications to a fair amount of scrutiny. Some patents require a fair amount of knowledge to understand what they cover – I’m not going to begin to claim that I have enough of that knowledge when it comes to these patents focusing upon memory. 🙂
Hi Auditor,
Thanks for the updates. At this point, I’m wondering what kind of memory Google is actually using in their servers.
Would they build servers in competition against companies like Cisco? Funnier things have happened when it comes to a business developing technology and processes in response to a need, and finding that they have the possibility of a whole new revenue stream. It’s a little tempting to think that if Google were to want to go in that direction, they might start considering Netlist as an acquisition target. I’m not sure if that’s a possibility, or if it is feasible. I still find myself questioning Google’s motivations in acquiring MetaRAM’s IP, especially knowing about the ongoing litigation.
Hi netlist,
There was also a drop in Netlist’s stock when the Inphy countersuit was announced, regardless of Netlist’s statements that the countersuit had no merit. I appreciate your extensive updates. Thank you. I’m going to have to find some time to look through all the references that you pointed towards this weekend.
Hi Mal,
I’m not sure that it’s safe to presume what Google is running on their servers at this point. One thing is for certain – there are a lot of lawyers involved in the thick of things here. It’s going to be interesting seeing how all of this plays out.
There’s something interesting here.
1. Google, Netlist then Metaram and Inphi are tangled up in lawsuits
2. Netlist stock shoots up and folks make money
Connected? I think Metaram was selling a lot of DDR2 memory to Google and others (for AMD servers) and about to sell DDR3 for Intel servers. There was a lot of press on Metaram and DDR2 in Feb 2008. Netlist threatens Google. Metaram gets hit with a lawsuit by Netlist. Google buys Metaram. Someone believes Netlist will settle and get the Google and other business. Smacks of insider trading. Search for CEO and board members for these companies and you get to Fred Weber (Metaram CEO) and Atiq Raza (board member) who worked together at AMD. Search on Raza and insider trading and you find he settled an insider trading suit. Remember Hector Ruiz at AMD?
Insider trading!
Bill and Auditor: I took a look at your postings. It seems there are 5 lawsuits:
You can see all the details via PACER of course, http://www.pacer.gov/
There’s a lot on Google v. Netlist. Not so much on the others. Your comments on the patent sale are interesting. The one MetaRam patent in the case against Netlist is 7,472,220. This patent is still assigned to MetaRam, but has a terminal disclaimer.
See the first page http://www.google.com/patents?vid=USPAT7472220
From http://www.freepatentsonline.com/help/item/Terminal-Disclaimer.html
“A binding statement made with the Patent Office in a case where more than one patent
has been obtained by the inventor on the same invention. The disclaimer will state
that the later patent will expire at the same time as the former patent and the later
patent will be enforceable only as long as both the patents are commonly owned.”
According to PAIR there are 3 patents that have to be commonly owned with 7,472,220:
http://portal.uspto.gov/external/portal/pair
11/461439 now US 7,580,312 (transferred according to your list)
11/524812 now US 7,386,656 (transferred according to your list)
11/584179 now US 7,581,127 (not on your list but assigned to Google according to the USPTO records)
There are currently 10 patents granted to MetaRam according to the USPTO, you listed 9, the last above is the extra one.
I will be interested to see how MetaRam and Google handle this with Netlist. If MetaRam doesn’t have rights to enforce this patent any more, they may have to drop their case against Netlist or perhaps Google has to take over. Then interesting things may happen.
Do either of you have any more information?
IP Agent and Bill,
The above-referenced MetaRAM patent application discloses Netlist patent work under Bhakta that was filed in 2005. Since Netlist claims HyperCloud is interoperable, can it work with Cisco legacy servers and routers without upgrading to a new CPU? Has a neutral or OEM eval/review been conducted on HyperCloud? Can HyperCloud be reconfigured for laptop or desktop use?
Hi IP Agent,
Thank you very much for your followup on this. When I originally checked the USPTO assignment database, I only saw 49 granted patents and patent applications, but now I’m seeing 50. US 7,581,127 is on my list above (fourth one down), but I didn’t include 7,472,220, and I’m not sure why.
There’s a new patent application now listed in both lists as well, 20090290442, which wouldn’t have shown up in either search since it wasn’t published until November 26th, but it was assigned to Google on November 18th as well (Unpublished patent applications aren’t displayed in the assignment database at the USPTO). That explains why there are now 50 showing, instead of 49.
It’s possible that when I searched in the Assignment Database on MetaRAM, I looked in the “Assignor Name” field instead of the “Assignee Name:” field, which would have meant that I would miss 7,472,220, since it doesn’t appear to have been assigned to Google.
What’s odd is that the USPTO assignment database now lists 50 granted patents and patent applications as having been assigned to MetaRam, and 50 granted patents and pending patent applications as being assigned by MetaRam, and they aren’t the same 50. I’m going to have to check on why there is a mismatch.
The granted patent you’ve pointed out is:
Interface circuit system and method for performing power management operations utilizing power management signals.
Interesting. Thank you.
Interesting patent info.
auditor:
NLST claims HyperCloud will work like regular memory, so it should work on desktops. NLST probably isn’t making it for laptops because the form factor may be different for laptop memory (which may or may not allow for the extra circuitry). Also, laptops may not be the ideal market for pushing this.
CSCO’s UCS strategy is essentially neutered by NLST HyperCloud – the difference is CSCO puts the ASIC on the motherboard while NLST puts it on the memory module itself. NLST also uses some other technologies like Planar-X and “embedded passives” to give it more space on the memory module (don’t know much about that).
NLST HyperCloud – as far as I have understood it – seems to allow greater memory density, greater memory speed and energy efficiency.
It seems that as you add memory (electrical load issues?), the achievable speed goes down. So on heavily memory-loaded systems you have the memory, but the achievable speed is not giving you the bang for the buck.
NLST HyperCloud makes the processor think less memory is on board, so it runs at full speed.
Repeating some references from above:
http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
Netlist goes virtual and dense with server memory
So much for that Cisco UCS memory advantage
By Timothy Prickett Morgan
Posted in HPC, 11th November 2009 18:01 GMT
…
The one cost that Duran did not calculate was savings in power and cooling, but the HyperCloud memory burns under 10 watts for a 16GB module, and in general, for a given capacity, a HyperCloud module will burn 2 to 3 watts less than a standard DDR3 module. And because HyperCloud memory can run at the full 1.33GHz speed, regardless of the capacity in the box, there should be a sizeable performance boost on applications that are sensitive to memory bandwidth – maybe as high as 50 per cent, says Duran.
NLST HyperCloud presentation:
http://www.scribd.com/doc/22814075/Hyper-Cloud-Press-Presentation-11-4-09
NLST demonstrated at Supercomputer Expo 2009. Here is what HP VP had to say:
http://www.prnewswire.com/news-releases/netlist-demonstrates-new-hypercloud-memory-modules-at-supercomputing-09-70174702.html
Netlist Demonstrates New HyperCloud Memory Modules at Supercomputing 09
Showcases interoperability between standard JEDEC server memory solutions and HyperCloud modules
To showcase its 2-vRank HyperCloud modules, Netlist is using industry standard servers, such as the HP ProLiant DL380, demonstrated in the following configurations:
* 8GB and 16GB 2 vRank DDR3 RDIMM functionality
* Three 2 vRank modules per channel
* 1333 Mega Transfers per second (MT/s)
* Interoperability with standard JEDEC DDR3 modules
* Interoperability with different RDIMM capacities
“Customers running memory intensive computing environments, such as virtualization, cloud computing, and HPC applications, are often limited by memory bottlenecks in their servers,” said Mike Gill, vice president, Industry Standard Servers Platform Engineering at HP. “The Netlist technology on HP industry-standard servers increases server memory capacity and bandwidth to enhance application performance in converged infrastructures.”
Here is the JEDEC standard that Inphi wants to sell chips for, which JEDEC is mulling over, and whose final version memory module makers are awaiting before they start using Inphi chips. However, this infringes NLST intellectual property. So until JEDEC resolves this, the memory module makers and Inphi are stuck.
http://www.simmtester.com/PAGE/news/showpubnews.asp?num=167
What is LR-DIMM , LRDIMM Memory ? ( Load-Reduce DIMM)
Tuesday, October 13, 2009
Here is an example of MU waiting for buffers:
http://www.micron.com/products/modules/lrdimm/index
LRDIMM
quote:
But because end quality is dependent on more than just reliability, we’re also working closely with buffer suppliers and server OEMs to ensure that our LRDIMMs function well with multiple server platforms.
HP has its own issues with CSCO’s UCS strategy (which NLST HyperCloud neuters).
http://www.cnbc.com/id/33865963
HP’s Shot Across Cisco’s Bow
Published: Wednesday, 11 Nov 2009 | 5:09 PM ET
By: Jim Goldman
CNBC Silicon Valley Bureau Chief
Earlier this year, Cisco [CSCO] opened a major front against one-time partners Hewlett-Packard [HPQ] and IBM [IBM] in the hotly competitive, and fast-growing server market with its blade, and so-called Unified Computing System initiative. The competition, and headlines it generated, become so intense so quickly that Cisco even posted a blog entitled “Is HP Now a Friend or Foe of Cisco?”
http://www.reuters.com/article/marketsNews/idCNN1228359720091113?rpc=44
HP still seen looking for deals after 3Com
Thu Nov 12, 2009 8:49pm EST
* Competitive pressures rising in tech with flurry of M&A
* Analyst say HP could pursue more networking deals
* Storage, software also seen as attractive to HP
* Brocade shares plunge 13 pct, unlikely target after 3Com
By Gabriel Madway
But why is Netlist “faking” pictures in a press presentation made so recently? Doesn’t seem right to me. End.
quote:
Look at page 9 of the above Netlist HyperCloud presentation using the Adobe viewer. Title is HyperCloud 2 vRank DDR3 RDIMMs. There is a picture of the Netlist memory module. Zoom in on the bottom of the memory module. Look near the notch in the connector at the bottom. Go to about 500% (times 5) magnification. The Netlist chips (“Isolation devices”) are different sizes; one is really squashed. The only explanation I can see is that the chip images have been pasted in using Photoshop. Why would Netlist do that if they had real modules to demonstrate?
Yes, you are right. The “buffer”-like chip to the left of the notch does look slightly smaller.
However note there are already 8 “buffer” chips there (which seems like a canonical figure if the “buffers” are for data lines). The smaller one to the left of the notch is the 9th. It might be an odd one out i.e. used for something else – maybe control signals or something like that.
After all there is no assertion that the chips are the same.
Hi Roomy Khan,
Interesting choice of names to use to post with here. I’m not sure who knows what about the inner workings of each of the companies involved, though it does appear to be a mess. I would guess that it would be unavoidable for some of the people involved in these companies not to know each other, or to have worked together, but there is the potential that something unusual might be going on. Just a quick disclaimer on my part – I own no stocks in any of the companies mentioned in this post, and in the comments to the post. 🙂
Hi auditor,
The engineering and interoperability of memory modules goes a bit outside the area of my expertise. Thankfully, netlist was able to answer your questions on that topic.
Hi netlist,
Thanks for answering auditor’s and engineer’s questions. I still have some catchup reading with the links that you’ve posted. (And I’m still wondering why Google purchased MetaRAM’s IP.)
On Dec 4, 09, Netlist brings separate suit against Google for infringing on Netlist USPTO patent 7,619,912, entitled “Memory Module Decoder” issued on Nov 17, 2009 based on patent application in mid-2005. In para 9, Netlist alleges that Google infringed on the ‘912 Patent including its use of the 4-Rank Fully Buffered Dual In-Line Memory Modules (4-Rank FBDIMMs) in its server computers.
This new Complaint significantly advances Netlist’s claims and rights against Google, because this suit comes after having examined Google’s server after winning discovery ruling from that (Google v Netlist) Court authorizing Netlist to inspect Google server despite Google’s strong objections.
Netlist’s Prayer for Relief includes temporary and permanent injunctive relief, and “treble damages” for unlawful practices of Google characterized as “willful and deliberate”.
This Complaint reads: “The ‘912 Patent is directed to memory modules with a logic element that overcomes computer system limitations that would otherwise constrain the memory module architectures with which the computer system can operate. As a result, the claimed memory modules effectively increase the memory capacity and improve the energy efficiency of the computers in which they reside. Netlist is the owner of the entire right, title, and interest in and to the ‘912 Patent. A true and correct copy of the ‘912 Patent is attached hereto as Exhibit 1.” Reference Case3:09-cv-05718-EMC Filed 12/04/09
From NLST’s complaints, and GOOG’s testimony, it seems to suggest GOOG is not just an innocuous buyer of memory from MetaRAM or some such infringer.
But GOOG seems itself to be a major party involved in issues of memory design (something which other posts here seem to suggest as well – that they had a hardware design group for such things).
NLST’s complaint includes use by GOOG of 4-rank FBDIMMs and inducing others to sell such stuff (maybe MetaRAM ?).
Patent in question is:
http://www.freepatentsonline.com/7619912.pdf
Just some comments on looking through the court dockets in GOOG suit against NLST.
In the previous GOOG suit against NLST, GOOG and NLST have a settlement conference in August 2010. They will probably have to thrash out an agreement before then.
In looking through court documents one can see that NLST has got access to a GOOG server (after GOOG protestations).
The protocol to be followed by NLST is outlined in:
JOINT INSPECTION PROTOCOL AND [PROPOSED] ORDER
NLST gets to inspect the FBDIMMs – the AMB buffer manufacturer, use of “Mode C” and non-Mode C, power consumption, replacement with standard FBDIMMs, thermal monitoring, and a maximum of 20 photographs (“Attorney’s Eyes Only”). The inspection takes place at GOOG’s lawyers’ offices (Fish and Richardson).
GOOG is saying it doesn’t contest that it is using FBDIMMs in Mode C.
Its argument perhaps is that the NLST IP is faulty.
NLST removed Morrison & Foerster and replaced them with Pruetz Law Group (a small outfit supposedly very good for IP).
From the docket item – “AMENDED JOINT CASE MANAGEMENT CONFERENCE STATEMENT AND [PROPOSED] ORDER”:
NLST inventors Jayesh Bhakta and Jeffrey Solomon have testified.
GOOG employees under spotlight:
Rick Roy – “involved in the development of the accused 4-rank FBDIMMs and who participated in meetings with Netlist concerning its patented technology”
Andrew Dorsey – same as above
Rob Sprinkle – same as above
GOOG’s main argument may be presented in this docket item:
[REDACTED] GOOGLE INC.’s RESPONSIVE CLAIM CONSTRUCTION BRIEF
Earlier, Judge Armstrong denied GOOG’s request to include the NLST ‘386 patent “prosecution history” (at the Patent Office, I presume):
As auditor reported above:
Update on Netlist v Google litigation. After a hotly contested hearing on 11/12/09, the Hon. Armstrong issued an order dated 11/16/09 in favor of Netlist’s ‘386 patent claim construction. On 11/18/09 or so, Google changed the attorney.
The Nov 12, 2009 order states:
THIS COURT HOLDS THAT pursuant to Markman v. Westview Instruments, Inc., 52 F.3d 967, 980 (Fed. Cir. 1995) aff’d, 517 U.S. 370 (1996) and this Court’s Standing Order at Paragraph 10, because the ‘386 Patent’s prosecution history is not in evidence and not addressed in either parties’ claim construction papers, Netlist’s objection is sustained.
IT IS HEREBY ORDERED THAT Google and its counsel shall not present, refer to, comment upon, introduce or use in any way, the ‘386 Patent’s prosecution history in its claim construction presentation.
From docket item # 27:
GOOG does not dispute it is using “Mode C” in its FBDIMMs.
From docket #27:
NLST wants to see GOOG’s servers so it can verify they are using what JEDEC refers to as “Mode C” to make it seem like there are fewer ranks of memory than are actually on the memory module.
From GOOG’s account, they say that in early 2006, GOOG was looking for manufacturers and testers of its FBDIMMs. It discussed this with various companies (including NLST).
They signed NDAs to see GOOG’s FBDIMM design.
“GOOG does not dispute that its FBDIMMs operate in Mode C” ..
Earlier GOOG had said (docket #33 documents) that it may call Desi Rhoden (Exec. VP of Inphi) to explain rank etc.
That witness may be less of an impartial “expert” after NLST vs. Inphi.
The impression one gets from the docket info in GOOG’s suit against NLST, is that GOOG was not just a customer of MetaRAM, but an active developer of memory, and that it was a user of privileged information that NLST gave to GOOG when NLST proposed new memory for GOOG.
Whether that info leakage had any link back to MetaRAM (through GOOG) is another story.
Hi Auditor,
From what little I’ve seen online regarding the injunctive relief requested on that suit, in part it asks for Google to stop using servers that use memory that infringes upon the Netlist patent. I don’t know how many servers Google uses that might include the memory modules in question, but if that is granted, it could possibly be a harsh blow to Google. Have to see if I can find a copy of the complaint.
Another point of view on Netlist vs Google notes the lobbying by Google asking that changes be made to current patent law. One would wonder if Google understands that it has a weak case against Netlist. Not to say that Google’s efforts to have the laws changed are aimed wholly against Netlist, but rather against companies like Netlist. I would hope, for the sake of innovation, that small companies like Netlist are not stripped of their ability to bring new ideas to market and be rewarded for their efforts.
The last post was a bit haphazard – just a quick run through the filings.
In answer to the potential question – “does NLST HyperCloud really work?” – I guess we have confirmation from GOOG’s extensive use of “Mode C” in its servers.
This is something which has emerged from discovery in GOOG vs. NLST (the case which GOOG brought – no monetary damages, but to be left alone by NLST).
GOOG acknowledges use of “Mode C”. This means their thrust will primarily be on claiming non-validity of NLST IP. However, NLST IP goes back to March 2004 (according to NLST filings) on the ‘386 patent. This predates the MetaRAM IP (which GOOG has now bought, in a panic to ensure that a MetaRAM loss while defunct in bankruptcy does not wind up hurting GOOG in its own case).
I am not sure about the relationship between GOOG and MetaRAM – it is possible that the “other manufacturer” that GOOG used was MetaRAM(?). There are few other players in this space – MetaRAM is dead, and Inphi is a generic component manufacturer that holds little IP – and they are awaiting the JEDEC FBDIMM “Mode C” proposed standard results before module makers decide what to do.
In GOOG vs. NLST (the case which GOOG brought in order to get relief), GOOG has had to furnish its server – which has led to discovery, which in turn has led to the recent NLST vs. GOOG lawsuit (which refers to another NLST patent as well). GOOG has lost its claim that the NLST patent history should be examined in the proceedings.
This seems to be the status thus far.
Dec 17, 2009 – the GOOG vs. NLST and more recent NLST vs. GOOG (concerning another patent infringement that NLST alleges following discovery in GOOG vs. NLST) have now been consolidated.
Evidently both NLST and GOOG wanted the two cases to be combined together. Accordingly new court dates have been set.
Hi spencity,
I have seen other places where Google had been asking for patent reform, before any hints of this litigation came out. The patent process itself does seem to be much more difficult for smaller businesses. I don’t know enough about the facts behind the Google/Netlist litigation to decide the strength of their case, but more seems to be coming out.
Hi netlist,
I’m still not certain of the relationship between Google and MetaRAM at this point either. I did notice a few more patent filings assigned from MetaRAM to Google a week after the initial batch of assignments.
I do appreciate the updates. Consolidating the cases does make sense, just on the basis of cost and judicial economy themselves. This is getting pretty interesting.
Hi Bill Slawski,
It’s interesting that Netlist did not react with a knee-jerk response to Google’s court filings earlier this year by immediately countersuing. Instead Netlist chose to first pursue legal action against Inphi. It makes sense to attempt to prove the legitimacy of the patents in question against the smaller company in order to arm itself with court-tested evidence. It will then be much more difficult for Google to claim that the patents in question are invalid.
If “Mode C” usage shows up in a random GOOG server, what are we talking about here? That nearly all GOOG servers use the infringing “Mode C”?
“Mode C” seems to be a smoking gun for use of 4-rank/virtual-rank, since it appears to have the BIOS report (incorrect) memory info to the processor so the processor can be fooled.
This would mean a lot more servers than claimed – MetaRAM (or was it GOOG – does anyone know who said that?) said there were only a “few” such infringing products manufactured. It does not seem like a few if all GOOG servers are tainted in the way the server produced in the discovery phase (of GOOG vs. NLST) was.
I am not totally clear on this, but it seems “Mode C” is related to having a (patched) BIOS report (incorrect) memory info to the processor.
Some info on JEDEC’s FBDIMM Mode C proposed standard
http://www.jedec.org/download/search/JESD82-20A.pdf
http://www.jedec.org/download/search/JESD82-28A.pdf
Of course, this seems not to be required by the newer NLST HyperCloud memory – which is supposed to work along with other memory in unaltered motherboards.
But “Mode C” usage is indicative of an attempt to do 4-rank and so is a “smoking gun”.
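For readers unfamiliar with the idea, here is a minimal, purely illustrative sketch of what “rank multiplication” amounts to: a buffer presents fewer logical ranks to the memory controller than physically exist, stealing one extra address bit to select among the hidden ranks. All names and the mapping below are hypothetical – this is not based on MetaRAM’s, Netlist’s, or Google’s actual designs:

```python
# Illustrative sketch only: mapping two logical chip-selects onto four
# physical ranks ("rank multiplication"). The memory controller believes
# it drives a 2-rank module; the buffer chip uses one extra address bit
# to pick between the two hidden ranks behind each chip-select.
# Hypothetical names and encoding -- not any vendor's real design.

def decode_physical_rank(logical_cs: int, extra_addr_bit: int) -> int:
    """Map (logical chip-select, stolen address bit) -> physical rank 0-3."""
    assert logical_cs in (0, 1) and extra_addr_bit in (0, 1)
    return (logical_cs << 1) | extra_addr_bit

# The controller only ever sees ranks 0 and 1, but the module
# internally addresses four ranks:
for cs in (0, 1):
    for bit in (0, 1):
        print(f"logical CS{cs}, A_extra={bit} -> physical rank "
              f"{decode_physical_rank(cs, bit)}")
```

The point of the sketch is just that the controller’s view (two ranks) and the physical reality (four ranks) diverge – which is why a BIOS that reports the “wrong” memory configuration would be a tell-tale sign.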
Thank you for this page. Got good info. Top questions on my mind:
1. Why did MetaRAM shut down? Couldn’t find anything. (My conspiracy theory: did the VCs find out the company was based on stolen IP? MetaRAM was established in mid 2006; the NLST patent – 7,289,386 – was filed in mid 2005.)
2. Is GOOG collecting MetaRAM patents to have bargaining/negotiating power with NLST? (Given the way patents are written, there is always room to overclaim/underclaim patent coverage, although attorneys try to make them as broad as possible.)
Hi spencity,
While the timing of filing lawsuits may have a strategy behind it, I’m not sure that we can read too much into that timing sometimes. With limited time to file some claims under statutes of limitations, and other deadlines dictated by court rules, someone filing a claim or counterclaim may not always be free to file a case in court at exactly the ideal time.
It is interesting that Netlist did first file a suit against Inphi, though.
Hi netlist,
I’ve been wondering how many Google servers might be using “Mode C” as well, and if they might be affected by the outcome of a settlement or judgment.
Hi Mike,
You’re welcome. I’m wondering the same things myself. It really was a total surprise to see all of those patent filings assigned to Google. I didn’t realize at the time that there was a hornet’s nest of litigation to go with them.
We don’t know for certain what kinds of memory modules Google uses in its servers, but a recently published study from Google on DRAM errors doesn’t mention any modules of more than 4 GB. The paper does mention that data collected from the study “covers multiple vendors, DRAM capacities and technologies, and comprises many millions of DIMM days.”
In that paper, we’re told this about Google’s systems:
The paper is:
DRAM Errors in the Wild: A Large-Scale Field Study (pdf)
It was written by Bianca Schroeder from the University of Toronto, and Google’s Eduardo Pinheiro and Wolf-Dietrich Weber. It was presented at SIGMETRICS/Performance’09, June 15–19, 2009, in Seattle, Washington.
quote:
I’ve been wondering how many Google servers might be using “Mode C” as well, and if they might be affected by the outcome of a settlement or judgment.
The court asked them to show a “GOOG server” and they showed one with “Mode C” in it. In court filings, GOOG has not mitigated the impact by saying “only a few servers are implicated”. Instead they have said they are not denying use of “Mode C”, but rather disputing the value of NLST’s IP.
However this is a risky tactic, as the cost of failure would be high (and possibly unacceptable) for GOOG. Which means settlement. The GOOG vs. NLST lawsuit GOOG filed in reply to NLST’s letter to GOOG may just have been that – a way to allow them time for a soft landing, esp. if they did not have a good answer to NLST’s letters.
Does GOOG throw away old servers and continue replacing, or is the error rate such that they wind up replacing them anyway after some months ?
The MetaRAM link with GOOG is unclear. My impression was that it was MetaRAM which sold “infringing” modules to GOOG (NLST vs. MetaRAM).
Now it turns out GOOG was the sponsor with component specs and seeking someone to manufacture according to GOOG specs (GOOG vs. NLST court dockets).
Since MetaRAM (in NLST vs. MetaRAM) claims a very small amount of sales, that would not account for the proliferation of “Mode C” in standard GOOG servers (like the one GOOG showed when forced by discovery in GOOG vs. NLST). Also MetaRAM says those were not “sales” and the units were “destroyed”.
Question is – why would they “destroy” that hardware ?
From auditor post above:
On 11/24/09 in the Netlist v MetaRAM joint case mgmt statement, MetaRAM disclosed the additional comment that it “ceased operations, and prior to then sold only approximately $37,000 worth of DDR3 memory controllers subject to lawsuit. None of those memory controllers were used by MetaRAM’s customers in commercial sales, and instead all were destroyed.” In the following sentence, MetaRAM referenced Google v Netlist as a related case. Actions speak louder than words. A reasonable inference is that MetaRAM has taken drastic action to reduce and limit any potential liability from alleged patent infringement. Can you guess the identity of MetaRAM’s customer, and why $37,000 worth of non-commercial DDR3 memory controllers were destroyed?
The article below suggests volume, power and cost will be hard to reduce. However NLST claimed just that with HyperCloud at the Supercomputer Expo – that is, memory density, speed increase (for heavily memory-loaded systems which otherwise have to run at slower speeds), power reduction (since 4-rank may allow you to power down “inactive” memory modules), and the ability to present more total memory than would otherwise be manageable.
The article below also suggests it is a complex thing to get right – it is possible that MetaRAM was not able to free up enough space on the memory module for enough “decoupling capacitors” etc.
http://lynnesblog.telemuse.net/292
Feb 25, 2008
MetaRAM Busts RAMBUS Stranglehold?
Snake oil or salvation from former AMD CTO,
By Lynne Jolitz
…
Is the technology innovative? Not likely — it sounds like a combination cache and bank decoder, which is not innovative in the least. In fact, you need 4x the number of components on the DIMM, which means 4x the number of current spikes and decoupling capacitors, even if you put the chips together in the same package. Because you have a fifth chip, you complicate things even more. There is no way you can approach the triple-zero (volume, power, cost) sacred to chip designers with such a design, because one single high-speed high-capacity chip will eventually win out given the proliferation of small expensive gadgets demanding the lowest of volume and power. In a world of gadgets like IPODs, cellphones, laptops, PDAs and the like, cost is very important but *not* the most important quantity. So RAMBUS doesn’t have a lot to worry about here.
…
So where does little MetaRAM come in. When technology fails, maybe a clever business model will do. MetaRAM’s big claim to fame is cost reduction — not for gadgets or laptops, but according to Fred Weber, CEO of MetaRAM, for “personal supercomputers” and “large databases”. And who is the big licensee for this so-called technology. Why, it’s Hynix of course, who announced they will make this lumbering memory module. They claim it will be lower power. I think I’d like an independent evaluation on this point, but it will probably be lower cost. Is it worth it? Given reliability considerations, that also remains to be seen. But the moral of this saga is simple — human memories are longer than memory architectures in this business, and the real puppet-master behind the throne (Kleiner-Perkins) is sure to walk away with the money. I wish I could say the same for the customers.
http://mobile.chipcrunch.com/Blogs/Startup.Blurbs/Semiconductor.startups.dropping.like.flies.html
Semiconductor startups dropping like flies
Written by Maciej Bajkowski
Tuesday, 14 July 2009
…
We profiled MetaRAM in March of last year, shortly after the company emerged from stealth mode. It was backed by several prominent venture capital firms including: Kleiner Perkins Caulfield & Byers, Khosla Ventures, Storm Ventures, and Intel Capital. This just shows you that having prominent VC backing is not a guaranteed indicator of success. Already back then we had a couple of concerns regarding the MetaRAM technology: First, with increasing DRAM frequency, how long would MetaRAM be able to hide the latency of their chipset via clever buffering of reads and writes? Second, it was inevitable that memory controllers would enable support for ever larger amounts of memory, possibly making MetaRAM technology irrelevant? Whether any of these was the actually reason for the company ceasing operations we might never know. The company’s website seems to be down, and as far as I’m aware nobody has been able to reach any of the company representatives for an official comment.
Just as Inphi (with its “iMB” buffer) is now hoping for JEDEC approval and then use by memory module makers, similarly MetaRAM was hoping to sell the chipset (and, it seems, make the memory itself as well).
Inphi also had a press release about the “iMB” buffer for Supercomputer Expo. What is not clear is if they actually got it working – since Inphi just sells a buffer chip component.
Since NLST seems to be claiming 4-rank (and it IS the inventor of 4-rank), why has it not gone after makers of 4-rank memory modules before?
http://www.cmtlabs.com/quadfbdimm.asp
The Memory Compatibility Experts
“Quad-Rank Fully Buffered DIMMs”
Or is it that NLST has targeted the buffer chip manufacturers (MetaRAM, and now Inphi)?
Hynix and SMOD were banking on MetaRAM at that time:
http://www.digitimes.com/news/a20080820PR200.html
Hynix demonstrates DDR3 R-DIMM using MetaRAM technology at IDF
Press release, August 20; Esther Lam, DIGITIMES [Wednesday 20 August 2008]
…
Hynix using MetaRAM “chipset” – MetaRAM memory module has Hynix logo on it (page 10):
http://www.ansoft.com/ie/Track2/DDR3%20Memory%20Module%20Design.pdf
http://www.epn-online.com/page/new56803/smart-launches-8gb-dual-rank-ddr2-rdimms.html
SMART launches 8GB dual-rank DDR2 RDIMMs
04/03/2008
…
The new module combines SMART’s new DDR2 packaging technologies with the MetaRAM chipset architecture.
I am trying to understand how GOOG’s use of “Mode C” is an indicator of infringement. Is use of 4-rank itself infringement?
NLST was the originator of 4-rank, yet it was made into a JEDEC standard. Does anyone know the history of how that worked?
If 4-rank is a JEDEC standard and NLST was the innovator, how did it become a standard? Does this mean NLST disapproves of it – including all the other manufacturers who make it – but, lacking legal resources, is only going after a few players first?
Bill Gervasi (now at SimpleTech) was at NLST at the time of 4-rank development.
He was also Chairman of JEDEC committee on memory modules.
http://www.docmemory.com/page/news/showpubnews.asp?title=What+is+a+4-Rank+DIMM+Memory+%3F&num=128
For a successful implementation of 4-rank DIMM memory, system designers need to be aware of which processors and memory controllers are enabled to support four-rank modules. Finally, it is necessary to note that byte five of the serial presence detect (SPD) describes the number of ranks on a module.
Many system designers are now rushing to find out what “4 rank memory” is all about. We have the pleasure to introduce Bill Gervasi, the inventor/initiator of “4 rank memory”, to further explain the technical details regarding 4-rank DIMMs.
4-rank modules, recently approved by JEDEC, address this gap by allowing up to 72 DRAMs per memory slot, enabling the 32GB-per-CPU capacity goal using commodity 512Mb DRAMs. When 1Gb DRAMs are finally in mass production, 4-rank modules double the reach again to 64GB per CPU.
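Since the excerpt above notes that SPD byte five reports the rank count, here is a short sketch of how software might read it from a raw DDR2 SPD dump. This assumes the common DDR2 SPD convention that the low three bits of byte 5 encode the rank count minus one – verify against the JEDEC SPD annex before relying on it:

```python
# Hedged sketch: extract the advertised rank count from a raw DDR2 SPD
# image. Assumption (check the JEDEC DDR2 SPD annex): byte 5, bits 2:0
# encode the number of ranks minus one.

def spd_rank_count_ddr2(spd: bytes) -> int:
    """Return the number of ranks advertised in a DDR2 SPD image."""
    if len(spd) < 6:
        raise ValueError("SPD image too short")
    return (spd[5] & 0x07) + 1

# Example: a fabricated 128-byte SPD image whose byte 5 advertises
# 4 ranks (0x03 -> 3 + 1 = 4).
fake_spd = bytes([0] * 5 + [0x03] + [0] * 122)
print(spd_rank_count_ddr2(fake_spd))  # -> 4
```

This is why the SPD matters in the “Mode C” discussion: whatever the SPD (or a patched BIOS) reports is what the memory controller believes about the module.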
quote:
1. Why did MetaRAM shut down? Couldn’t find anything. (My conspiracy theory:
Yes, not clear to me either (first article below). It could have been:
– the semiconductor slump (low memory prices) of that time
– serious issues with the technology not working well
– patent issues (or a realization that NLST had the earlier IP, or a more comprehensive set of related IPs – for example in “embedded passives” – which would allow successful use).
NLST has IP in “embedded passives” which frees up real estate on the memory module. In addition MetaRAM has IP in “stacked modules”, which NLST has criticized for its inability to deliver symmetric lines to memory chips:
http://www.netlist.com/technology/technology.html
While some packaging companies stack devices to double capacity, Netlist achieves the same result without stacking, resulting in superior signal integrity and thermal efficiency. Stacking components results in unequal cooling of devices, causing one device to run slower than the other in the stack. This often results in module failures in high-density applications.
The density limitation is solved by proprietary board designs that use embedded passives to free up board real estate, permitting the assembly of more memory components on the substrate. The performance of the memory module is enhanced by fine-tuning the board design to minimize signal reflections, noise, and clock skews.
This is a presentation on NLST’s “embedded passives” by NLST’s Bill Gervasi (who went on to SMOD and the chairmanship of the JEDEC DRAM Packaging Committee):
http://www.discobolusdesigns.com/personal/IMAPS_netlist_embedded_resistor_reliability_20050125.pdf
NLST’s new HyperCloud memory modules are pictured in this presentation (pg. 9):
http://www.scribd.com/doc/23156890/Hyper-Cloud-Press-Presentation-11-24-09New
Hyper Cloud Press Presentation 11-24-09New
Date Added 11/25/2009
Compare that to:
MetaRAM’s modules do seem a bit cluttered (with a possibly asymmetrical chip layout?):
http://www.ansoft.com/ie/Track2/DDR3%20Memory%20Module%20Design.pdf
Inphi’s “iMB” buffer and a possible memory module:
LR-DIMM with Inphi’s iMB™ Component
http://www.inphi.com/images/productImageLibrary/highRes/Inphi_LR-DIMM_with_iMB_Component_gold.jpg
netlist, thank you for sharing all of your research and reasoning on this thread. your efforts and generosity are very much appreciated.
And Bill, thank you for starting this thread about Google, MetaRAM, the patents, and the lawsuits. This is the best thread of information about Netlist that I know of. Cheers.
Happy New Year to all, and may all Netlist investors prosper.
i just reread the entire thread. want to thank auditor too, and everyone else who contributed to this thread. didn’t mean to take anyone for granted. thanks all, very helpful info & discussion.
An update on the various court cases.
Looks like NLST vs. Inphi (and retaliatory Inphi vs. NLST) are on track.
GOOG vs. NLST and NLST vs. GOOG (inspired by discovery in GOOG vs. NLST) have been consolidated (request of both GOOG and NLST) – both to be heard by Judge Armstrong.
NLST extended GOOG’s time to answer the complaint to Jan 29, 2010.
Meanwhile, NLST vs. MetaRAM (and the retaliatory MetaRAM vs. NLST – although MetaRAM does hold some IP, unlike Inphi) have both been withdrawn by the respective parties.
Since MetaRAM is in bankruptcy, it would want to end the case – in any case MetaRAM vs. NLST wouldn’t have much meat if they no longer own the patent they are asserting (though perhaps they could still assert harm caused by NLST while MetaRAM owned those patents).
NLST probably can’t get much from a bankrupt MetaRAM – although they MAY have been able to block the transfer of IP from MetaRAM to GOOG (since NLST had potential recoveries to make from MetaRAM estate in case of win against them for infringement).
So is this related to a gradual “understanding” in the NLST vs. GOOG case – not necessarily toward settlement, but about how the case should proceed (as usually happens between two opposing legal teams – i.e. they agree on what terms the fight will proceed)?
Reasons why NLST would retract case against MetaRAM
– removes MetaRAM vs. NLST (minor inconvenience that it may be)
– reduces court costs and whittles away nonessentials (since moral victory against MetaRAM less interesting than against still healthy GOOG or Inphi) – plus same boutique lawyer team handling all cases (with allied legal firm as well)
– having the withdrawal by MetaRAM may help them slightly in the fight against GOOG (to neutralize GOOG’s use of MetaRAM-like arguments – since GOOG now holds MetaRAM’s IP).
Reasons why MetaRAM (privately held, but still a limited company?) would withdraw its case
– is in bankruptcy – limited options
– no real retaliatory case against NLST (esp. true if MetaRAM folded partly because of that understanding – that they had a weak hold on the IP)
Does anyone know the answers to any of these questions?
1) Why did Google originally decline to use Netlist’s product, and instead order products from MetaRAM?
2) Why did MetaRAM declare bankruptcy, and are they planning to emerge from bankruptcy and continue as a private, limited company? If so, what will their business be?
3) Google is claiming that Netlist’s patents are “invalid”. In what way? What evidence or reasoning supports this argument?
4) Reportedly, neither Google nor Inphi is seeking monetary damages from Netlist, but Netlist is seeking monetary damages from Google and Inphi. Does this fact suggest that Netlist has the stronger cases against Google and Inphi?
5) When is it likely that Netlist’s new product “HyperCloud” will complete trials by OEMs, be approved and certified, be ordered in great volumes, and start generating significant earnings for Netlist?
6) How might Netlist be negatively affected by adverse judgments in the two court cases, and by the JEDEC committee’s impending decision on memory product standards?
7) If Google loses or settles the case with Netlist, is Google likely to become a paying customer of Netlist?
My thanks to anyone for their thoughts on, or answers to, these and related matters.
Two more questions and thoughts:
8) If MetaRAM is planning to emerge from bankruptcy and continue as a private company, why would they sell their many patents to Google (and be left with no IP) unless their patents do in fact infringe on Netlist’s patents, and are more of a liability than an asset going forward?
9) If MetaRAM’s many patents do infringe on Netlist’s patents, why did Google quickly buy them all from a bankrupt MetaRAM? If MetaRAM’s patents infringe on Netlist’s patents, they should be useless to Google as a legal defense in the court case with Netlist, as bargaining leverage with Netlist, and as a basis for Google or its contractors to manufacture memory products as an alternative to, and competitor against, Netlist’s memory module solution for servers.
Since Netlist sued MetaRAM over MetaRAM’s patents allegedly infringing on Netlist’s patents, Google must know that Netlist will sue Google if Google ever tries to use MetaRAM’s patents to manufacture memory products.
1) Why did Google originally decline to use Netlist’s product, and instead order products from MetaRAM?
From what we know now from the court dockets – GOOG has an internal hardware group which wanted to MANUFACTURE memory modules. They discussed with various parties (including NLST) manufacturing memory modules according to GOOG specs and components. At that time NLST may have revealed what they were able to offer (or were in the process of developing – since NLST had that lull while they transitioned to the China factory). In either case GOOG may have felt NLST was unable to deliver at that time – plus GOOG may have wanted to do it themselves (given they had their own team inside GOOG).
Eventually they wound up using other suppliers.
This by itself does not reflect badly on NLST. What it does reveal, however, is that GOOG was far more complicit than an innocuous buyer of memory from MetaRAM or others (as I was assuming earlier). Thus a direct infringer.
2) Why did MetaRAM declare bankruptcy, and are they planning to emerge from bankruptcy and continue as a private, limited company? If so, what will their business be?
They had the support of INTC and others (basically supplying the buffer chip – like Inphi wants to do now). Now MetaRAM claims (in court dockets) that they only sold about $37K worth of goods (?) and “destroyed” the rest – so they aren’t infringing NLST’s stuff (!).
Inphi is doing something similar to MetaRAM (except they only make the buffer chip – while MetaRAM had the buffer chip plus the ability to create a memory module). However, as pointed out above, MetaRAM may have used “stacking” and suchlike, which NLST looks askance on – because of its asymmetric heat dissipation and line lengths (asymmetric delay on lines).
3) Google is claiming that Netlist’s patents are “invalid”. In what way? What evidence or reasoning supports this argument?
This is standard boilerplate language for anyone’s first response to any patent claim – you can see it in all the patent cases.
You will note GOOG “rushed” to court on the NLST “letter”. This is because GOOG probably saw no (simple) answer to NLST’s claims in that letter – it would inevitably lead to complex arguments. So GOOG chose to take it to court (in GOOG vs. NLST). That court case wound up costing GOOG – they had to turn over a GOOG server to NLST – which resulted in discovery of “Mode C” usage and data for NLST. NLST already had counterclaims in GOOG vs. NLST, but they probably were waiting for additional data from this discovery – which they used in NLST vs. GOOG (which is more recent).
Another advantage for GOOG in going to court is it establishes an orderly method to deal with this “threat”. Since it affects the health of GOOG’s entire server infrastructure (since a typical GOOG server is using “Mode C” which is a smoking gun for “4-rank” usage), it was an essential asset to protect. Now in court proceedings, GOOG has the luxury of doing things in an orderly manner – no tension – if they are weak they settle and pay in an orderly way without any threat to GOOG’s structure. Plus they have option to do a buy deal with NLST (if NLST HyperCloud is that superior).
Circumstantial evidence suggests GOOG’s purchase of MetaRAM’s assets is a ploy to gain SOME leverage. However, as you have seen, the MetaRAM cases have been voluntarily withdrawn by both NLST and MetaRAM – so this may affect GOOG adversely in that those cases no longer help it much in discovery or in issues against NLST.
MetaRAM has significant IP – however it is IP in “stacked” modules and stuff which may or may not overlap NLST. Plus NLST has earlier (March 2004 antecedents) in the relevant patents.
Note also NLST’s position is significantly different from a year ago – at that time, even if GOOG had wanted to, they could not have done a deal with NLST (as NLST was still going through the transition to the Chinese factory and the move off commodity memory into these high-margin products).
4) Reportedly, neither Google nor Inphi is seeking monetary damages from Netlist, but Netlist is seeking monetary damages from Google and Inphi. Does this fact suggest that Netlist has the stronger cases against Google and Inphi?
NLST IS seeking damages – treble damages (for willful violation etc.). This by itself doesn’t mean they have a “stronger” case.
The reason GOOG hasn’t claimed damages is that the tone of GOOG vs. NLST is to “please protect us from NLST” – as stated above it is basically a structured arena where GOOG can safely deal with this problem in a controlled way – i.e. if it works out good if not pay.
The reason Inphi hasn’t claimed damages is that they have a (some would say) frivolous, retaliatory suit. Secondly, they have not been damaged by NLST yet. In any case Inphi is a component maker which is not exactly focused on this niche, and its IP is weak in this area.
On a related note, John Smolka (former Inphi employee) joined NLST recently (from SEC filing on awarding of options).
5) When is it likely that Netlist’s new product “HyperCloud” will complete trials by OEMs, be approved and certified, be ordered in great volumes, and start generating significant earnings for Netlist?
Someone else may have better insight into this.
6) How might Netlist be negatively affected by adverse judgments in the two court cases, and by the JEDEC committee’s impending decision on memory product standards?
The JEDEC committee is probably conflicted, because their proposed standard conflicts with NLST’s IP. This means MU and others will not be using Inphi buffer chips. So basically the alternative to HyperCloud is on ice until JEDEC decides how to proceed.
NLST will be negatively affected if it “loses” the court cases – which is unlikely given NLST’s strong position in this area – i.e. second to none. If there is overlapping IP, then there is a settlement. In any case, there are no real “competitors” left in this area. MetaRAM was the only one seriously specialized in it (and a supplier of memory buffers), plus it has some IP. Inphi does not come close. GOOG is a serious player, but it too has weak IP in this area (only the MetaRAM IP they just bought). Plus, specifically, “4-rank” (i.e. “fooling” the processor/memory controller into thinking there is less memory than there really is) is specifically an NLST patent with antecedents back to March 2004. Plus there is a history of leakage – from Texas Instruments to the JEDEC committee, to MetaRAM, to GOOG’s discussions with NLST prior to making its own memory – that fits into a “story”. Bill Gervasi, inventor of 4-rank while at NLST, was later head of the JEDEC committee – so there is probably some promiscuous employment (given such a small niche area).
7) If Google loses or settles the case with Netlist, is Google likely to become a paying customer of Netlist?
It is unlikely GOOG will “lose” the case – that would mean shutting down the GOOG network. It’s not as if GOOG can’t pay whatever price is required – so more likely GOOG will eventually settle – either for a cash sum, or more likely (to escape the black eye of violating the “do no evil” motto) they would opt for something “neutral” like overpaying for NLST memory. Or, if GOOG is confident in its own manufacturing (some have suggested their in-house hardware division is not all that great), they may license it then.
Of course such a decision would have devastating consequences on the JEDEC FBDIMM “Mode C” proposed standard.
GOOG would probably like there to be a standard – for better pricing (since it is a big consumer of memory).
So one option (best for GOOG) would be some arrangement where NLST IP is allowed by NLST to become JEDEC standard – in return for something or other (i.e. shades of RMBS).
netlist,
Thank you very much for your fast and detailed reply. I’m glad you’re on this thread. All the best.
8)If MetaRAM is planning to emerge from bankruptcy and continue as a private company, why would they sell their many patents to Google (and be left with no IP) unless their patents do in fact infringe on Netlist’s patents, and are more of a liability than an asset going forward?
Unlikely that MetaRAM would emerge from bankruptcy – usually companies go into bankruptcy to shed debt, and in many cases the management can continue (if the company is resurrected) under new owners. In MetaRAM’s case the management WERE the owners. So it is unlikely to emerge AS MetaRAM.
However it lives on as GOOG-owned MetaRAM IP. Which GOOG will probably use to bolster its position against NLST, and possibly for future dealings with other companies (since patents tend to get used as currency as well – if sued, countersue with patents the other side may be infringing – given the state of excessive issuance of patents in overlapping areas).
After sale of IP to GOOG, MetaRAM assets are further reduced, so “MetaRAM” of old probably will not emerge.
Think now of GOOG as the new “MetaRAM”.
9)If MetaRAM’s many patents do infringe on Netlist’s patents, why did Google quickly buy them all from a bankrupt MetaRAM? If MetaRAM’s patents infringe on Netlist’s patents, they should be useless to Google as a legal defense in the court case with Netlist, as bargaining leverage with Netlist, and as a basis for Google or its contractors to manufacture memory products as an alternative to, and competitor against, Netlist’s memory module solution for servers.
Well having those MetaRAM patents (on the cheap) is probably better than appearing in court without pants on.
Since Netlist sued MetaRAM over MetaRAM’s patents allegedly infringing on Netlist’s patents, Google must know that Netlist will sue Google if Google ever tries to use MetaRAM’s patents to manufacture memory products.
GOOG is not trying to “win” the case with the MetaRAM patents – it is just slightly “better” to have them. That is, it can perhaps get away with fewer violation issues, or pressure NLST on other fronts as a nuisance.
However, note that GOOG’s situation is not symmetric with NLST’s. GOOG is an existing violator – so it is in for some “damages”. There is also the threat of treble damages. Not that the money will be of great concern to GOOG with its billions – but still, as lawyers, GOOG’s attorneys will seek to limit damage to GOOG and avoid a jury trial at the last minute.
Language is typical for such cases:
..Google’s infringing activities in the United States and this District include its use of 4-Rank FBDIMMs in its server computers and contributing to and/or inducing others to make, use, sell, and/or offer for sale such 4-Rank FBDIMMs, and/or components thereof which lack any substantive non-infringing use.
..Google’s infringement of the ‘912 patent is willful and deliberate ..
..Netlist be awarded damages adequate to compensate Netlist for ..
..That the court award treble damages to Netlist for the unlawful practices described in this Complaint.
..That the court render judgment declaring this to be an exceptional case.
netlist,
Thanks again for your most recent post. I learned a lot of good info from it.
Overall, it seems that Netlist is in the best position. In contrast, Google and Inphi (and MetaRAM and Texas Instruments) seem to have engaged in questionable conduct, but Netlist has not, apparently.
And the fact that Netlist has a brand-new, potentially “breakthrough” product in an important niche of the emerging cloud-computing market, at a time when there are no real or strong competitors, suggests that Netlist should prosper significantly in 2010 — after a few months of resolving the current conflicts.
And since Netlist required 1-2 years and tens of millions of dollars to develop Hypercloud, it is unlikely that any serious competition to Hypercloud will appear for at least a year.
Like you, I have been thinking about Google’s founding principle and solemn commitment to “do no evil”. They appear to have violated their own values in their dealings with Netlist, so it will be interesting to see if Google redeems itself by compensating Netlist properly — eventually.
netlist,
Thanks also for your fast and detailed reply to questions 8 and 9.
I got a good laugh from your witty line about Google buying MetaRAM’s patents to avoid “appearing in court without pants on.”
I agree with you that Google’s recent purchase of MetaRAM’s patents may help Google a little, but most of Google’s alleged misconduct occurred while it did not own those patents. As you know, Google’s current ownership of the patents will not give Google “retroactive” protection for the period when it did not own them.
So I also agree with you that Google appears to have knowingly and willfully infringed against Netlist’s legal rights — and therefore will eventually have to compensate Netlist to some degree. The extent of damages to Netlist and of compensation by Google is what the court will determine.
Also, the court will realize that Google did not invent anything related to MetaRAM’s patents, was not the original filer or owner of the patents, and only recently rushed to purchase MetaRAM’s entire list of patents to try to protect itself from its prior misconduct and current legal liabilities.
Under these circumstances, I doubt that the court is going to look very favorably on Google’s belatedly acquired, “second-hand” patents.
By the way, I also agree with your earlier doubts about the suspicious claims by MetaRAM that it sold only $37K worth of product and destroyed the rest of production (why was that, hmm?), and therefore, committed little or no infringement against Netlist.
Aside from wanting justice for Netlist (in court and in the market), I will be interested to eventually learn convincing explanations for many of the “mysteries” in this story.
Two corrections to my recent remarks:
1)It’s my understanding that MetaRAM claimed that it destroyed all of the products (worth $37,000) that it sold – products which were never used commercially by the buyer (thought to be Google).
2)There is at least one serious competitor to Netlist’s “Hypercloud” product: Cisco. But Netlist’s product attaches directly to the memory, while Cisco’s product attaches to the motherboard. Apparently this difference gives Netlist’s product an advantage over Cisco’s product in performance.
If NLST is going after GOOG for “4-rank” usage, this is a JEDEC standard and plenty of other memory makers make 4-rank memory modules.
Or is it that those do not infringe in some specific way – or is it that they ALL infringe, and NLST has simply chosen the fight with GOOG (being the most prominent, and the best player to get an early resolution out of in court)?
If so, that could mean JEDEC usage of NLST IP – and other memory module makers would have to fall in line if GOOG concedes?
netlist,
If and when you can, would you please explain what you think the positive and negative effects on NLST would be if JEDEC adopts Netlist’s IP and Hypercloud technology as the JEDEC standard?
Thanks.
If and when you can, would you please explain what you think the positive and negative effects on NLST would be if JEDEC adopts Netlist’s IP and Hypercloud technology as the JEDEC standard?
I don’t know how something being a “standard” relates to something being “proprietary”. On the face of it, JEDEC, being a standards body, just specifies a common way of doing something – acting as a middle player for a disparate and competitive group of companies.
It may not have anything to do with whether it is proprietary or not. Generally JEDEC would want to standardize on something that does NOT do something proprietary (to minimize costs of going with that standard).
In fact RMBS (Rambus) was accused of being part of early negotiations in the standards-setting process for DRAM and using that prior knowledge to patent stuff ahead of the standards process – in effect STRENGTHENING its hold on what would eventually become the standard. Essentially a way of herding competing manufacturers into a corner (having committed to a certain way of manufacturing) so it could squeeze out royalty payments later – in effect harming the whole purpose of a standards-setting body, which is to make things easier (and cheaper) for the industry.
http://en.wikipedia.org/wiki/Rambus
July 30, 2007, the European Commission launched antitrust investigations against Rambus, taking the view that Rambus engaged in intentional deceptive conduct in the context of the standard-setting process, for example by not disclosing the existence of the patents which it later claimed were relevant to the adopted standard. This type of behaviour is known as a “patent ambush”.
Given this context it seems reasonable that JEDEC would wait before it finalizes a proposed standard – unless of course it is assured that the license fees related to that standard are going to be “reasonable” (by NLST).
From above link:
February 5, 2007, U.S. Federal Trade Commission issued a ruling that limits maximum royalties that Rambus may demand from manufacturers of dynamic random access memory (DRAM), which was set to 0.5% for DDR SDRAM for 3 years from the date the Commission’s Order is issued and then going to 0; while SDRAM’s maximum royalty was set to 0.25%. The Commission claimed that halving the DDR SDRAM rate for SDRAM would reflect the fact that while DDR SDRAM utilizes four of the relevant Rambus technologies, SDRAM uses only two. In addition to collecting fees for DRAM chips, Rambus will also be able to receive 0.5% and 1.0% royalties for SDRAM and DDR SDRAM memory controllers or other non-memory chip components respectively.
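A rough back-of-the-envelope calculation shows what the capped percentages in that FTC ruling (0.5% for DDR SDRAM, 0.25% for SDRAM) mean in practice. The revenue figure below is invented purely for illustration:

```python
# Illustration of the FTC royalty caps quoted above. The $1B revenue
# figure is made up; only the percentage caps come from the ruling.

DDR_SDRAM_CAP = 0.005    # 0.5% maximum royalty on DDR SDRAM
SDRAM_CAP = 0.0025       # 0.25% maximum royalty on SDRAM

def max_royalty(chip_revenue: float, rate_cap: float) -> float:
    """Largest royalty collectable on the given chip revenue under the cap."""
    return chip_revenue * rate_cap

# On a hypothetical $1B of DDR SDRAM sales, the cap works out to $5M:
print(max_royalty(1_000_000_000, DDR_SDRAM_CAP))  # → 5000000.0
```

Even at fractions of a percent, royalties tied to an industry-wide standard add up across every manufacturer shipping to that standard – which is why standard-essential IP is so valuable.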
This would suggest that JEDEC CAN wind up in a position where it is pushing a standard that is heavily tied to one company’s IP – resulting in attendant royalty payments.
So on the face of it – no, it would not harm NLST if JEDEC adopts NLST-related technology as a standard. In fact it would HELP NLST – since it would herd more folks into doing things that infringe NLST IP – thereby increasing the potential royalty collection by NLST in the future (once IP issues are resolved in court).
JEDEC NOT adopting NLST-related stuff as standard doesn’t help NLST – since it means the industry is doing something that is unrelated (and thus un-royalty-collectable by NLST).
netlist,
Thank you for your thoughtful, thorough reply, as is your style.
netlist,
thank you for clarifying this complex issue. You seem to have an excellent grasp of technology as well as legal issues. I have a question that you might be able to shed some light on. Since netlist has to buy RAM on the open market, how can you be competitive compared to DRAM manufacturers such as Elpida and Micron?
Since netlist has to buy RAM on the open market, how can you be competitive compared to DRAM manufacturers such as Elpida and Micron?
I don’t know enough about this to comment – but it seems NLST is in a similar situation to other memory module makers, including those that do not make their own memory chips:
STEC – Simple Tech
SMOD – Smart Modular
Also it seems these memory chip makers themselves can be buyers of NLST-like tech. For example, Hynix (one of the major memory chip makers) had licensed MetaRAM:
http://lynnesblog.telemuse.net/292
Feb 25, 2008
MetaRAM Busts RAMBUS Stranglehold?
Snake oil or salvation from former AMD CTO,
By Lynne Jolitz
…
MetaRAM’s big claim to fame is cost reduction — not for gadgets or laptops, but according to Fred Weber, CEO of MetaRAM, for “personal supercomputers” and “large databases”. And who is the big licensee for this so-called technology? Why, it’s Hynix of course, who announced they will make this lumbering memory module. They claim it will be lower power. I think I’d like an independent evaluation on this point, but it will probably be lower cost. Is it worth it?
http://www.digitimes.com/news/a20080820PR200.html
Hynix demonstrates DDR3 R-DIMM using MetaRAM technology at IDF
Press release, August 20; Esther Lam,
DIGITIMES [Wednesday 20 August 2008]
…
Intel will demonstrate the world’s first 16GB 2-rank DIMM from Hynix, using the MetaRAM DDR3 chipset at IDF. Intel will also demonstrate a server with 160GB using Hynix DDR3 R-DIMMs and Meta SDRAM technology, Hynix said.
So memory chip makers also DO deals with companies like NLST – in order to build more complicated modules (that include more than just memory chips).
Here is a list of memory chip manufacturers:
http://www.interfacebus.com/memory.html
This article lists the dominant memory chip makers (not the same as memory module makers):
http://news.cnet.com/8301-13924_3-10057284-64.html
October 3, 2008 4:00 AM PDT
Memory chipmakers face survival test
by Brooke Crothers
Hynix – in financial trouble due to an extended drought in the memory market during the last 2 years (low prices, low margins). However it is linked to the S. Korean government and can get a bailout.
Samsung
Qimonda AG (Infineon) – “ailing”
MU – “largest U.S. maker of memory”
question:
Since netlist has to buy RAM on the open market, how can you be competitive compared to DRAM manufacturers such as Elpida and Micron?
So the short answer is that companies like NLST have to buy memory chips from those companies, but if those companies want to make memory modules they have to license from companies like NLST – or buy buffer chips (as they were planning to do from Inphi, and earlier from MetaRAM).
From the chart they show, you can see that the companies which license out their technology (i.e. their IP – intellectual property) are the ones with the greatest gross margins.
http://seekingalpha.com/article/16968-gross-margin-kings-memory-chip-manufacturers
Gross Margin Kings – Memory Chip Manufacturers
by: Robert Zenilman September 15, 2006 | about: CY / SNDK / MU / SFUN / RMBS / IDTI / ISSI / MOSY / RMTR / SSTI / STEC / STAK
Rob Zenilman submits: Within a specific sector, gross margins can differ dramatically, due to the different nature of their businesses. Among the memory chip manufacturers tracked here, gross margins ranged from 24.8% (STEC) up to 85.7% (RMBS). The companies that have drastically higher gross margin are what I like to call “gross margin kings”.
What separates out the companies with the four highest gross margin rates is that they earn money by licensing out their technology. 88% of Rambus’ revenue is from licensing, 100% for MoSys (MOSY), 56% for Saifun Semiconductors (SFUN), 100% for Virage Logic (VIRL) and 25% for Staktek Holdings (STAK).
…
However, having high gross margins is no guarantee of profitability. Of the four companies here with gross margins over 70% (and that derive most of their revenue from licensing) – only Rambus has a positive P/E of 62.84.
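The margin gap the article describes comes down to cost of goods sold: a licensing sale has almost no COGS, while a module sale carries the cost of DRAM bought on the open market. A toy calculation (all figures invented for illustration, not taken from the article):

```python
# Why licensing-heavy companies top the gross-margin chart discussed
# above: an IP license carries almost no cost of goods, while a memory
# module carries the cost of DRAM chips bought on the open market.
# All numbers below are invented.

def gross_margin(revenue: float, cost_of_goods: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cost_of_goods) / revenue

module_margin = gross_margin(100.0, 75.0)   # module maker: chips dominate cost
license_margin = gross_margin(100.0, 5.0)   # licensor: mostly pure IP revenue

print(module_margin, license_margin)  # → 0.25 0.95
```

That spread roughly mirrors the 24.8% (STEC) to 85.7% (RMBS) range quoted in the article – though, as the article notes, high gross margin alone is no guarantee of profitability.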
Can anyone here provide any details about the kind of information that NLST CEO Hong will probably discuss in his “investor presentation”? Thanks.
Netlist to Present at the Needham Growth Stock Conference in New York City
IRVINE, Calif., Jan. 6 /PRNewswire-FirstCall/ — Netlist, Inc. (Nasdaq: NLST) today announced that CEO C.K. Hong is scheduled to make an investor presentation at the Needham 12th Annual Growth Stock Conference on Thursday, January 14, at 2:30 pm Eastern Time. The conference is being held January 12-14, at The New York Palace in New York City.
The presentation will be accessible by live webcast in the Investors section of the Netlist website at http://www.netlist.com. A replay of the webcast will be available on the Netlist website for 30 days.
Another twist that makes things even more interesting here.
According to the USPTO Assignment database, MetaRAM has licensed the use of the method in patent 7,472,220 to Netlist, and Netlist has licensed the use of the method in patent 7,289,386 to MetaRAM.
Memory module decoder
Interface circuit system and method for performing power management operations utilizing power management signals
It appears that the execution date on the conveyances was December 21, 2009, and the recording of the assignments took place on January 4, 2010.
The ‘386 patent appears to be at the heart of some of the litigation between Google and Netlist, and between Netlist and MetaRAM. Part of a settlement between Netlist and MetaRAM? I don’t know for certain. Might be interesting to listen in to the live webcast that netlistfan mentioned in the comment above this one.
This is seriously interesting. Thanks !!
USPTO assignment search page – entering patent number to search:
http://assignments.uspto.gov/assignments/?db=pat
Reveals that:
7472220 – MetaRAM license to NLST ..
http://assignments.uspto.gov/assignments/q?db=pat&qt=&reel=&frame=&pat=7472220&pub=&asnr=&asnri=&asne=&asnei=&asns=
7289386 – NLST license to MetaRAM ..
http://assignments.uspto.gov/assignments/q?db=pat&qt=&reel=&frame=&pat=7289386&pub=&asnr=&asnri=&asne=&asnei=&asns=
So a cross-licensing arrangement – and this fits in with the recent withdrawal of cases by both parties in NLST vs. MetaRAM and MetaRAM vs. NLST (as reported above).
The USPTO assignment info for each patent shows:
Conveyance: LICENSE (SEE DOCUMENT FOR DETAILS).
Compare with the patents that were sold to GOOG (as reported above) – for example:
7580312 – Power saving system and method for use with a plurality of memory circuits
http://assignments.uspto.gov/assignments/q?db=pat&qt=&reel=&frame=&pat=7580312&pub=&asnr=&asnri=&asne=&asnei=&asns=
These have:
Conveyance: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS).
That is, the “ownership” (ASSIGNORS INTEREST) is transferred to GOOG.
In an earlier post I had wondered HOW NLST allowed MetaRAM to sell its IP – since NLST was potentially due money (in the case of an eventual win in court against MetaRAM).
Now it seems something similar to that DID happen – i.e. either:
– NLST signalled to MetaRAM to keep certain IP in hand (while it could sell other stuff NLST was not interested in – like the IP on “stacked memory”, which NLST has claimed has serious asymmetry issues – search above for “stacked”).
– MetaRAM recognized which of its IP would be valuable in eventually getting NLST off its back, and retained THOSE patents.
The date of assignment for the patents sold to GOOG is 09/11/2009:
Assignor: METARAM, INC. Exec Dt: 09/11/2009
While the ones licensed to NLST are dated 12/21/2009:
Assignor: METARAM, INC. Exec Dt: 12/21/2009
So MetaRAM KNEW as early as 09/11/2009 that it would not need THOSE patents – and which ones NOT to sell to GOOG (!).
Several questions are left unanswered:
– why NLST did not insist that all of MetaRAM’s IP be given (or sold) to NLST – maybe NLST wasn’t interested in all of it?
– why GOOG didn’t overpay to buy ALL of MetaRAM’s IP, including the patents that MetaRAM retained. Or did MetaRAM decline, since it needed those to fend off NLST for an eventual settlement so its bankruptcy proceedings could proceed unhindered?
– what happens to the patents MetaRAM has retained (not sold to GOOG), like the cross-licensing patents? Can NLST claim an interest in who gets the patents in bankruptcy proceedings, since it is (now) a licensee?
As well as this question:
– does MetaRAM hold other patents that it has NOT sold to GOOG? It would be hard to believe GOOG would not want the most NLST-specific ones – but were there MetaRAM patents that GOOG did not buy, which MetaRAM is still holding on to? And why – since there is little value in retaining those patents; as a company, those assets will have to be liquidated during bankruptcy.
As conjectured above, the NLST/MetaRAM mutually agreed dismissal of cases – NLST vs. MetaRAM and MetaRAM vs. NLST – bodes well for the strategy the NLST lawyers were adopting: one of conciliation with a defeated enemy in order to position better for the fight against the larger one:
– since not much is extractable from a bankrupt company, NLST can at least make sure info from discovery etc. in these cases is not available to help GOOG in the NLST/GOOG cases.
Now it seems NLST DID get something from that settlement as well – broader coverage thanks to help from MetaRAM patents.
searching the USPTO assignment search page – entering METARAM as “Assignor”, then clicking the “METARAM, INC” name that appears:
http://assignments.uspto.gov/assignments/q?db=pat&asnrd=METARAM,%20INC.
shows the patents that MetaRAM has assigned to others.
http://assignments.uspto.gov/assignments/q?db=pat&asned=NETLIST,%20INC.
The patents that were assigned to NLST. Only the 7472220 patent appears for Netlist.
The MetaRAM patents being transferred to GOOG number around 50 + 7 (patents or filings).
http://assignments.uspto.gov/assignments/q?db=pat&asned=GOOGLE%20INC.&page=15
quote:
So MetaRAM KNEW as early as 09/11/2009 that it would not need THOSE patents – and which ones NOT to sell to GOOG (!).
Another possibility is that MetaRAM sold off its IP without too much thought – but because it was being sued by NLST, and was in turn retaliatory-suing NLST based on the 7472220 patent, it HAD to retain that one. So everything else went on sale, but it had to keep that patent in hand in order to retain some standing in the court case against NLST (which was its counterweight to NLST’s suit against it).
When MetaRAM/NLST settled, this patent was lying around, so it became part of the eventual settlement – i.e. cross-licensing between the two.
So maybe this is the (simpler) interpretation.
The question is, why did MetaRAM license the NLST patent then? Or is it standard procedure to cross-license this way – a standard “closure” to the case, making each party “whole” by giving it a license to the patent that nullifies the case (so, for instance, the same type of suit cannot be filed again, either by NLST against MetaRAM or by MetaRAM against NLST) – having nothing to do with whether MetaRAM intends to use the NLST patent (probably not)?
Bill,
Thanks for making and sharing your latest discovery. Very interesting indeed.
And netlist,
Thanks for building on Bill’s discovery by sharing your related discoveries and by thinking through the implications and possibilities.
Great detective work, you two. The mystery slowly unfolds…
Thanks, netlist and netlistfan,
I’m very thankful for the comments and questions and information being shared here by everyone.
I’m still wondering how the licensing of technology to MetaRAM might affect the litigation between Netlist and Google, if at all.
from Briefing.com this evening:
“4:49PM NetList files for $30 mln mixed securities shelf offering”
NLST closing price today was $5.21. Now it’s $4.82 after-hours. Ouch!
My guess is that NLST will trade in a range between $4.50 and $6.50 for 3 to 6 months, and won’t rise steadily or significantly until the lawsuits are settled, the OEMs test and approve HyperCloud, the latter gets certified, and JEDEC decides the standardization question.
Over the next 1 to 2 years, I think NLST and HyperCloud will prosper nicely. But tonight’s share-dilution (on top of the many other obstacles to NLST that I just mentioned) will probably suppress the share price for several months.
Other opinions?
http://www.netlist.com/investors/SEC_filings.htm
The above link will take you to netlist.com, where you can download NLST’s S-3 filing dated today, 1-11-2010, in the format that you prefer. It confirms that NLST has filed with the SEC its plan to sell $30 million in mixed securities in a “shelf offering,” which may be sold over an unspecified duration.
For an explanation of the “dilution”, please read the following on the NLST Yahoo board (on the possibility of poison-pill provisions), since with 10M shares a hostile takeover by GOOG would not be out of the question:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=11805&mid=11887&tof=2&frt=2#11887
Re: OFFERING, SELL SELL SELL .. part1
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=11805&mid=11888&tof=1&frt=2#11888
Re: OFFERING, SELL SELL SELL .. part1
netlist,
Your “poison pill” hypothesis seems very plausible to me too. Netlist is especially wary of Google right now, because of Netlist’s lawsuits with Google, but Netlist is also probably wary of being vulnerable to a “premature” buy-out or hostile take-over from HP, Dell, IBM, Cisco, or any number of bigger companies. And almost everyone is bigger and richer than Netlist.
Netlist’s new product, HyperCloud, has real potential to be a blockbuster that could take Netlist’s share price to fantastic heights. And since Netlist has worked hard and long to remake itself, and now hopes to finally achieve the potential that it has never enjoyed since its IPO in 2006, I think that Hong and Netlist’s other big inside owners must really want Netlist to have a chance to succeed and grow on its own, and not have its independent life “taken away” prematurely in what is really its infancy. A premature buy-out or take-over would be emotionally painful to Netlist management and employees, I think.
In addition to the emotional aspect, there is the financial aspect: Hong et al obviously would prefer to sell their shares in a few years at $500, not now at $5!!!
So now seems like a perfect time to obtain the legal right to sell up to $30 million worth of securities, in the manner and timing of Netlist’s choosing. Why? First and most important, for self-defense against Google (and others), as you rightly point out. And second, because Netlist’s share price has been range-bound (and likely will continue to be) until the obstacles I mentioned a few comments above get resolved. This S-3 filing would cause a price drop at almost any time, so it’s best to get it over with now, when the price is likely to be trading sideways for another 3 to 6 months anyway. Then, when Netlist has removed the obstacles currently in its way, and the orders and payments come in, Netlist will likely be clear for take-off.
Lastly, I just want to highlight that Netlist’s S-3 filing is not an offer to sell shares, and it’s not an obligation to sell shares — it will be (once approved by the SEC) just an option that sits on the “shelf,” waiting for Netlist to use if and when it needs or wants to.
Thanks again, netlist and netlistfan,
There does seem to be the potential for Netlist to grow into a remarkable company, but they also appear to be an attractive target at this point. If the filing can help them, then it sounds like a good move. I wasn’t sure what kind of reactions this post might receive when I first published it, and I didn’t expect the mystery to start unraveling the way that it has. Thanks again for keeping this post up to date with the latest news.
Today’s call was very depressing. I have accumulated quite a bit and was hoping to hear about OEM qualifications in today’s Needham talk. All I heard was 6 months to revenue and no concrete OEM announcement, nor any talk of lawsuit settlement.
Netlist and others – can you help me understand why it takes so long for qualification? Also, are the lawsuits preventing quick adoption of HyperCloud?
Seems to have become hype-o cloud from HyperCloud!
joeq,
I share your discouraged feeling and your financial pain. I agree with you that the NLST Investor Presentation was very disappointing – so much so that I sold all my shares of NLST, at a painful-but-bearable loss, so that I could “let go and move on.” I hope to make the loss back elsewhere.
Like many others, I jumped too soon and too much into NLST because of all the great descriptions of its new product, HyperCloud, in November. But after watching a lot of my money drop and drift for two long months (while other stocks are rising), the thought of having my money falling or flat for another 6 months (or more) prompted me to sell, and switch to other stocks.
I still think NLST and HyperCloud have great potential IF everything works out well. But will it? And when?
Here is a “top 10” list of my concerns regarding NLST:
1) Is HyperCloud truly the huge technological advance that the “hype” has claimed it is?
2) Does HyperCloud work exactly as promised, or will tests by OEMs require adjustments and delays?
3) How long will it take for OEMs to finish testing HyperCloud, and will they approve it? (Netlist Investor Relations doesn’t know.)
4) How long will it take for HyperCloud to receive full certification? (IR doesn’t know.)
5) How long will it take for OEMs to place big orders and for NLST to start mass production? (IR doesn’t know.)
6) How long will it take for NLST to start receiving big sales and big payments? (NLST “thinks” 6 months, but based on all of the unknowns, I think 6 months is just a guess, and it could take longer.)
7) How long will it take for the lawsuits between NLST and GOOG, and NLST and Inphi, to be resolved (and will NLST win, or benefit from, these lawsuits)? Nobody knows.
8) To BILL: how will the above lawsuits be affected by a) GOOG’s buying of MetaRAM’s patents, and b) MetaRAM’s and NLST’s cross-licensing of patents? (IR doesn’t know.)
9) What are NLST’s plans for their recent $30 million S-3 “shelf” filing, and do these plans include protection against a possible hostile takeover (netlist’s “poison pill” idea) or premature takeover (my idea)? (IR doesn’t know.)
10) Will NLST be able to become a successful company on its own (after struggling for 3 years since its IPO), or will NLST get bought out and merge into a much larger corporation? No one knows.
I want to emphasize that these are my concerns and understandings regarding NLST. Anyone is free to call Netlist Investor Relations’ Ms. Jill Bertotti at (949)474-4300, or to email her at jill@allencaron.com, and ask her your own questions.
There are no doubt additional unknowns and concerns regarding NLST — but these 10 alone seem likely to make an investment in NLST take 6 months or longer to significantly pay off.
For example, if the economic recovery in the U.S. and the world is slower than expected, or suffers a serious setback, tech spending on products like HyperCloud will likely be lower and slower.
I still wish NLST (and NLST investors) all the best, and I will watch to see if it eventually takes off (in price and performance), but I won’t buy it again unless and until it proves itself to be growing quickly and steadily.
BILL, thanks for starting this very helpful thread, and for your great discoveries and comments.
netlist, thanks for your especially useful information, prompt replies and thorough comments, many links, and thinking through of implications.
And thanks to everyone who commented and contributed to this thread.
joeq, I hope my reply helps. Maybe others can also answer your questions. I wish you the best.
Best regards everyone! Maybe I’ll see you later. Hope you have a healthy, happy, and prosperous new year!
oops! On number 9 of the above list, I meant to type “premature buy-out” not “premature takeover”.
Thanks for the smile, Bill. :>)
Just 5 more (I promise) :>)
11) When will JEDEC decide whether or not to adopt NLST’s IP and HyperCloud as the industry standard — and how will NLST be affected either way?
12) How well will NLST compete against much bigger and richer competitors (like CSCO)?
13) Are tiny NLST’s production capacities too small to keep up with a potentially huge demand by giant companies like HP, DELL, and IBM?
14) Does HyperCloud truly have a competitive edge over other products, and if so, how long and how much will it be profitable for NLST?
15) How long will it take before technological innovations by other companies advance ahead of NLST’s HyperCloud?
OK, I’m done. Good luck all!
Thanks, joeq and netlistfan.
Great questions, and a lot to think about. The issues involving Netlist, MetaRAM, and Google here are the type that affect many tech companies. Can the small startup survive to become a large one? How can innovation in technology, market pressures, standards bodies, and the need for that innovative technology potentially shape our futures?
I’m not sure in this particular instance, between these particular parties. I’m not sure that I’ve seen Google purchase patents from another company before in what might be characterized as a defensive maneuver, if that is what in fact took place. That’s why I wrote about it in the first place. The cross-licensing of patent processes between Netlist and MetaRAM was a surprise as well.
I don’t have any stocks from any of the companies involved, but I think there are some pretty large implications behind what happens between the companies involved for large scale data centers, and search providers like Google. I’ll be following along, and very thankful for all of the sharing of information within the comments on this thread.
I hope you all have a wonderful new year as well. Thanks, again. 🙂
Note GOOG does not have the patent that MetaRAM was suing NLST with.
So the MOST overlapping patent that MetaRAM could think of is now licensed by NLST.
Even if GOOG were to license NLST patents now, it would not undo years of infringement (and treble damages if wilful).
quote:
Today’s call was very depressing. I have accumulated quite a bit and was hoping to hear about OEM qualifications in today’s Needham talk. All I heard was 6 months to revenue and no concrete OEM announcement nor any talk of lawsuit settlement.
Netlist and others – Can you help me understand why it takes so long for qualification? Also, are the lawsuits preventing quick adoption of HyperCloud?
Seems to have become hype-o cloud from hypercloud !
NLST yahoo board:
http://messages.yahoo.com/?action=q&board=nlst
Many people have said on that board that OEM qualification does take time – maybe others can shed light on whether 3-6 months is normal.
Here is an overview of Needham presentation:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=12087&mid=12087&tof=1&frt=2#12087
netlist provided lots of good info in the last link to the NLST Yahoo Board that he posted just above. Scrolling far down on that Yahoo Board thread, the following opinion by “herbieray20…” that NLST might hit $2 or $3 and take a full year to take off seemed worth posting here.
netlist rightly replied that NLST’s share price might jump earlier than that, and by a lot, if, for example, the lawsuits settle sooner and favorably.
Lots of scenarios are possible, but most current opinions state that it will most likely take 6 to 12 months for NLST to take off in a sustained way.
Here’s the quote to consider:
“This [Netlist’s HyperCloud] will take at least a year to bear fruit, with lots of ups and downs in the stock price.
As a retired engineer in the server area, I have lots of experience with product development cycles, evaluation of chips sets, etc. I would be very surprised if this product has tangible effects on revenues/profits for at least 4 quarters, and then who knows what the competition will have done..?
My prediction for the next year is consolidation between $2 and $3 at best.
Just my opinion.”
[by herbieray20… on NLST Yahoo Board, 1-16-10]
NLST has answered Inphi’s complaint in Inphi vs. NLST.
Among the usual boilerplate, one comment sticks out. NLST claims that Inphi cannot claim “injunctive relief”, because Inphi is subject to a “compulsory reasonable and non-discriminatory (RAND) license requirement pursuant to Inphi’s membership in JEDEC and activities therein in connection with these patents”.
Netlist,
Based on your statement,
Among the usual boilerplate, one comment sticks out. NLST claims that Inphi cannot claim “injunctive relief”, because Inphi is subject to a “compulsory reasonable and non-discriminatory (RAND) license requirement pursuant to Inphi’s membership in JEDEC and activities therein in connection with these patents”.
Does this mean that Netlist is infringing Inphi patents in its product and trying to use JEDEC as a shield? Clever move by Netlist. Wonder if they infringe other JEDEC patents.
NLST sued Inphi (NLST vs. Inphi) and later Inphi retaliated with Inphi vs. NLST.
Inphi is lacking any serious IP in the area. Their retaliatory suit has variously been reported as “frivolous”.
I posted that to support the general impression that Inphi’s suit is a kneejerk suit crafted without thought.
Netlist today announced that the United States Patent and Trademark Office issued to Netlist Patent No. 7,636,274 for its invention related to memory load isolation and memory rank multiplication, and Patent No. 7,619,912 for its invention related to memory rank multiplication.
Hi McDee,
Thanks for citing those. From what I understand, they were announced because they have something to do with Netlist’s HyperCloud memory modules.
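From what I’ve read, the general idea behind rank multiplication can be sketched roughly like this (a conceptual illustration only – my simplified reading, not the actual claims of those patents or Netlist’s design):

```python
# Conceptual sketch of "rank multiplication": logic on the memory module
# combines the controller's chip-select with a borrowed address bit, so
# 4 physical ranks of cheaper, lower-density chips appear to the host as
# only 2 ranks. Illustration only - not the actual patent claims.

def decode_physical_rank(logical_cs: int, high_addr_bit: int) -> int:
    """Map (logical chip-select 0-1, borrowed address bit) to a physical rank 0-3."""
    assert logical_cs in (0, 1) and high_addr_bit in (0, 1)
    return (logical_cs << 1) | high_addr_bit

# The controller only ever asserts 2 chip-selects...
logical_ranks = (0, 1)
# ...but the module's decoder fans those out to 4 physical ranks.
physical = sorted({decode_physical_rank(cs, bit)
                   for cs in logical_ranks for bit in (0, 1)})
print(physical)  # [0, 1, 2, 3]
```

The point is that the host never has to know about the extra ranks, which is presumably part of why the modules can be plug and play.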
I didn’t do a rundown of Netlist patents here, but I looked through a number of them. These weren’t patents that were just published, but they are fairly recent. The newer of the two was granted in December, and both were announced in a press release dated January 19th.
In GOOG vs. NLST (now consolidated with NLST vs. GOOG at request of both GOOG and NLST), GOOG sacks whole legal team of Fish and Richardson.
From filing dated Jan 21, 2010:
Please take notice that plaintiff GOOGLE INC., hereby substitutes Timothy T. Scott, Geoffrey M. Ezgar, and Leo Spooner III of the law firm of King & Spalding LLP as attorneys of record in the place and stead of David J. Miclean, Howard G. Pollack, Jason W. Wolff, Juanita R. Brooks, Robert J. Kent, Jr. and Shelley K. Mack of the law firm of Fish & Richardson, located at 12390 El Camino Real, San Diego, CA 92130 and 500 Arguello Street, Suite 500, Redwood City, CA 94063.
What is interesting is that the new law firm King & Spalding is NOT KNOWN for patent or intellectual property litigation.
That is, they are not known for being “trial lawyers” or “intellectual property” lawyers, but are considered #2 in the country for arbitration (yes, ARBITRATION)!
If you look at their practices:
http://www.kslaw.com/portal/server.pt?space=KSPublicRedirect&control=KSPublicRedirect&CommunityId=227&ui_pa_sort=group&ui_pa_display=
They surely DO have practice (like all large firms) in:
– Licensing
– Patents
– Trade Secrets & Non-Compete Litigation
– Mergers & Acquisitions
HOWEVER, they are not a small tight outfit that just deals with “intellectual property” or patent defence.
If you have all the money in the world (GOOG) to protect yourself in an IP-related lawsuit, you would get the best lawyers for that (if you were intending to contest on IP grounds).
However if you were thinking of getting the best deal – you would get the best company in arbitration.
In terms of rankings they are ranked VERY HIGH in arbitration, but are not even MENTIONED in rankings for patent or intellectual property litigation:
http://www.kslaw.com/portal/server.pt?space=KSPublicRedirect&control=KSPublicRedirect&PressReleaseId=3375
King & Spalding Lawyers Earn 26 Rankings As Leaders In Their Fields and 18 Practice Areas Recognized In Chambers Global 2009
04 Mar 2009
Historically not exactly famous for patent litigation either:
http://en.wikipedia.org/wiki/King_&_Spalding
Notable Mandates
* Counseled Sprint Corp. in its sale of Sprint Publishing & Advertising, the directory publishing business to RH Donnelly Corp. for $2.23 billion. The transaction was announced in 2002 and closed in 2003.
* Represented JDN Realty Corp. in its $1.02 billion sale to Developers Diversified Realty Corp. for a combination of cash and stock. The deal closed in 2003.
* Advised Credit Suisse First Boston as financial adviser to Graphic Packaging in its $3 billion merger with fellow forestry and paper company Riverwood Holding in 2003.
* Represented Caremark Rx in its $6 billion merger with AdvancePCS in 2004.
* Counseled Lockheed Martin in its $2.4 billion acquisition of Titan Corp., in a mixed cash and stock offer which closed in 2004.
* Advised SunTrust Bank in its $6.98 billion purchase of National Commerce Financial Corporation in 2004.
* Legal counsel to Novelis, a Canadian-based aluminum company in its purchase by Hindalco Industries Ltd., an Indian steel company for total consideration of $6 billion. The transaction closed in 2007.
Link for #2 ranking for arbitration:
http://www.kslaw.com/portal/server.pt?space=KSPublicRedirect&control=KSPublicRedirect&PressReleaseId=3491
King & Spalding Earns No. 2 Spot in 2009 Arbitration Scorecard
26 Jun 2009
NEW YORK, June 26, 2009—King & Spalding, a leading international law firm, earned the No. 2 spot in Focus Europe’s 2009 Arbitration Scorecard, a worldwide ranking of law firms by number and size of arbitrations. The rankings were published in the summer 2009 issue of Focus Europe, an annual supplement to The American Lawyer.
Focus Europe noted that King & Spalding is among “the first tier of arbitration law firms.” The firm appeared as arbitration counsel in a total of 25 arbitrations included in the 2009 Arbitration Scorecard.
King & Spalding was also included in Focus Europe’s list of Twelve Big Awards for its representation of three of the listed awards: Azurix Corp. v. Argentine Republic ($165 million), Sempra Energy International Co. v. Argentine Republic ($128 million) and Enron Creditors Recovery Corp. and Ponderosa Assets, LP v. Argentine Republic ($106 million).
The 2009 Arbitration Scorecard covers international arbitrations (not limited to Europe) that were active in the years 2007 and 2008. It is based on nearly 250 cases—all either commercial disputes with stakes of at least $500 million or treaty disputes with stakes of at least $100 million.
Among the survey’s list of disputes, King & Spalding served as claimant’s counsel in one investment treaty arbitration and three contract arbitrations in which at least $1 billion was in controversy.
King & Spalding is ranked among the leading international arbitration practices in the world. Chambers USA 2009 says, “This powerhouse continues to impress with its international arbitration practice, attracting praise for its depth of knowledge and client service,” an accolade that echoes from the publication’s 2008 edition, which described the firm as “currently one of the arbitration arena’s biggest success stories.” King & Spalding was nominated for a Chambers USA Award for Excellence 2009 in international arbitration and was a finalist in 2008. It also features among the world’s leading international arbitration practices in Chambers Global 2009. And the 2009 edition of The Legal 500: US describes King & Spalding’s international arbitration team as “simply terrific.”
About King & Spalding
King & Spalding is an international law firm with more than 880 lawyers in Abu Dhabi, Atlanta, Austin, Charlotte, Dubai, Frankfurt, Houston, London, New York, Riyadh (affiliated office), San Francisco, Silicon Valley and Washington, D.C. The firm represents half of the Fortune 100 and in Corporate Counsel surveys consistently has been among the top firms representing Fortune 250 companies. For additional information, visit http://www.kslaw.com/.
From an interview of GOOG’s NEW new lead lawyer (Timothy Scott):
http://apps.kslaw.com/Library/publication/Zimmer%20Scott%20Met%20Corp%20Counsel%20Jan%202010.%20pdf.pdf
Top Litigators Manage Firm’s California Offices
Page 32 The Metropolitan Corporate Counsel January 2010
The Editor interviews Timothy T. Scott and Donald F. “Fritz” Zimmer, Jr., King & Spalding LLP.
…
Editor: To what extent has the cost of e-discovery contributed to the increase in litigation expense?
Scott: You can’t even litigate a simple thing without the discovery cost dwarfing everything else in the case. If a complaint in a securities class action case survives a motion to dismiss, the cost of collecting and reviewing all the electronically stored data creates an impetus to settle the case before even getting to the merits in order to avoid the cost of e-discovery.
Zimmer: The invention of email has done more to benefit plaintiffs’ counsel than any other development of the last 20 years. I have colleagues on the plaintiffs’ side of the bar who tell me they thank their lucky stars that email was invented.
…
Not only has GOOG suffered from discovery on the hardware side – by having to reveal its server to NLST (and thus proving use of “Mode C” in GOOG servers).
It will now have to contend with NLST riffling through GOOG e-mails as well – as the trail of who said what at GOOG, and when they knew it, is examined.
The trial will examine the role of GOOG employees (mentioned in earlier court dockets and posted some days back – see above):
Rick Roy – “involved in the development of the accused 4-rank FBDIMMs and who participated in meetings with Netlist concerning it’s patented technology”
Andrew Dorsey – same as above
Rob Sprinkle – same as above
And god knows what else THAT “discovery” of GOOG internal e-mails will reveal.
The situation is strongly in favor of GOOG settling the case.
– for reasons mentioned above i.e. legal issues and “discovery” problems for GOOG (a loss will also not help their “do no evil” image – and image is essential for GOOG i.e. consumer trust, since that is part of the GOOG business model).
– for reasons that alternatives to NLST are at a standstill.
Alternatives to NLST – there are none so far.
MetaRAM is out of business (NLST now licensee of patent MetaRAM hoped to use against NLST).
Inphi which owns no IP in this area and was just hoping to sell a buffer chip is embroiled in legal dispute with NLST.
Meanwhile memory module makers like Micron are waiting for JEDEC to arrive at a standard so they can start moving forward. Inphi is also awaiting that, so memory module makers will use its buffer chip (now that MetaRAM – which was earlier partnered with many memory module makers – is gone).
But while the NLST/GOOG dispute (being the most prominent) is not resolved, and the licensing status of the IP infringed by the proposed JEDEC standard (like the JEDEC FBDIMM “Mode C” proposed standard) is not clear, JEDEC cannot move forward with standardizing (since that will benefit NLST as ITS IP is made into the standard, so many people can start using it – meaning more infringers and people for NLST to collect damages from).
In any case, JEDEC procedure is to see that the standard does not infringe proprietary technology – and if it does, to negotiate licenses itself (or via its members) to allow the standard to move forward. After all, the creation of “the standard” is to encourage standardization – which will lead to lower overall costs to its members. JEDEC cannot blindly adopt something as a standard that still has IP and licensing issues unresolved.
For this reason – we will see a DROUGHT of memory in this space. NLST, being the only unencumbered player – both as creator of the memory and as its manufacturer – will be in an enviable position, as there is no other player who can deliver what NLST can deliver.
Plus it is not like NLST HyperCloud is a totally new form factor – it is plug and play and requires no modification to the BIOS. This means it is a “no brainer” for an OEM server manufacturer to incorporate NLST HyperCloud, since nothing else is available and there is no “cost” to doing this (i.e. “how can we lose”).
In addition, all this is timed to coincide with the much reported server upgrade cycle (since there is a lot of pent up demand as there were fewer upgrades/purchases in last 2 years due to economic uncertainty and the upgrade cycle is now beginning – memory price improvement etc.).
And you have OEMs in a crunch – they cannot avoid using NLST.
Meanwhile memory module makers will be getting impatient, as they will miss the upgrade cycle (at least in this area of data center upgrade/cloud computing expansion). They will be under pressure to negotiate some licensing deals with NLST.
Note that while many memory module makers have done deals with MetaRAM in the past, they have NOT been prosecuted by NLST (partly to limit its legal expenses perhaps – and partly because these people are all potential customers).
GOOG will perhaps be under the most pressure – with ever-expanding hardware needs (GOOG being a big user of memory-loaded systems, for which the NLST HyperCloud solution is most appropriate), GOOG will be in a crunch as well if it cannot upgrade its systems for lack of non-infringing solutions.
In addition, note that for the possibility of wilful infringement (since GOOG had discussed with NLST – then went ahead and violated NLST IP), GOOG could face treble damages in court (if the case goes through).
So the pressure is on GOOG – to settle. But because memory/server expansion is such a big part of its business, GOOG loses every month that it delays – every month that standard/legal memory modules are NOT available to sate the growth needs of GOOG server expansion.
So the clock is ticking for most of these players, and that makes the GOOG vs. NLST/NLST vs. GOOG cases unlike a traditional IP infringement suit – since there are time issues as well which are NOT in favour of GOOG.
Hi Netlist,
Thanks for the updates and observations on the legal representation in the Google vs. Netlist litigation. There do seem to be some factors involved that point more towards a settlement than prolonged litigation. I guess we wait and see.
Netlist,
Brilliant analysis. Maybe there will be some money after all. Any thoughts on a Google settlement in terms of dollars? How much can we expect? Do you think Inphi will also settle, and any guess on how many dollars we can expect out of them?
I do not know what the difference is between GOOG using 4-rank (for which “Mode C” is a smoking gun) and the other memory module makers who are making 4-rank memory – whether they are violating NLST IP as well.
It is possible that they are – except that NLST has chosen to not fight them right now – and has gone against GOOG first (low legal resources and also that the memory module makers could be allies later).
quote:
Maybe there will be some money after all. Any thoughts on a Google settlement in terms of dollars? How much can we expect? Do you think Inphi will also settle, and any guess on how many dollars we can expect out of them?
Trying to pin down the knowns – and keeping in mind the constraints, i.e. what we know of GOOG psychology, their business model, and how they hope to behave to retain customer trust.
My guess is that as part of a settlement, GOOG will want no admission of guilt for starters (to avoid pollution of the “do no evil” motto). To achieve that they will be willing to concede in other areas, i.e. monetarily.
GOOG can pay and walk away. But the situation is not that simple – there is a reason it was infringing NLST IP – this is exactly what GOOG needs for its servers.
NLST HyperCloud is designed precisely for GOOG type situations (i.e. increases speed for memory-loaded servers – apart from the cost and power advantages).
So GOOG has to make sure that it can negotiate a path for itself as well (so GOOG servers are not shut down). So maybe the carrot will be a contract for use, or licensing terms to protect existing GOOG usage.
Because of the constraints above, there will have to be a transition from acrimonious to congenial. GOOG knows it can’t just walk away from NLST even after throwing money at it – it will have to buy memory or license from NLST in the future even if they were not presently in litigation.
Therefore I suspect a change in attitude at GOOG – the change in law firm already changes the faces that NLST lawyers meet – thereby allowing discussion in a different direction (as I posted above, the new GOOG law firm is #2 in country for ARBITRATION, and not particularly famous for IP litigation).
The effect of that will be multiplicative for NLST – a concession from GOOG will be validating for NLST. And GOOG may understand the value it holds here: just acknowledging the validity of NLST IP is a signal to other players to fall in line (if GOOG the gorilla is acknowledging NLST IP validity).
I would not expect GOOG to take a share in the company – since insiders may be careful at this stage. Plus NLST may need to remain neutral in order to be a trusted supplier to a whole range of consumers (which include many cloud computing competitors of GOOG).
Regarding Inphi – maybe they will settle for a small payment. They probably haven’t sold that many buffer chips (which would only have been used if JEDEC finalized the standard). So maybe there won’t be any great damages.
Don’t see any real synergies between Inphi/NLST, so maybe a simple cash payment or a slap on the wrist.
Inphi holds no IP in this area, yet was trying to step into MetaRAM’s shoes after that company went bankrupt. MetaRAM was the darling of Intel and other memory module makers – who were using its buffer chips. Now Inphi was hoping to do the same (except without any IP) – mainly banking on JEDEC/module-makers to deal with the IP issues. However NLST didn’t go after those, but went directly after Inphi for IP infringement.
Argument for early GOOG/NLST settlement:
GOOG also will understand certain inherent advantages with an early settlement.
However, superficially one would not expect the settlement to occur much before the 3-6 months for OEM qualification (Needham conference audio), since GOOG knows NLST will not be manufacturing in volume until then – so no hurry.
On the other hand, there may be a whole process for internal qualification at GOOG (which does a lot of custom solutions internally) – and they may want to “join the program” earlier so they can also give feedback to NLST (along with the other OEMs like HP and DELL).
This type of thinking would suggest a much earlier settlement, where early resolution is beneficial to GOOG, rather than delaying settlement (which achieves nothing – they still have to pay, are in a worse negotiating position, and are behind on the OEM qualification roadmap).
In any case, GOOG founders may be of the opinion that to “keep it simple” – i.e. if it IS decided that they have to settle eventually, then to settle EARLY (and remove the distraction), and instead use the time to forge new relationship with NLST and to get in early with qualification of the new memory.
If this reading is correct, we may see a settlement far earlier than the 3-6 month OEM qualification period.
Some comments on eventual JEDEC/NLST negotiations:
JEDEC is waiting for legal clarification, since its proposed standard falls afoul of NLST IP. Since the JEDEC standard is meant to make things easier for manufacturers, they would require favorable licensing terms from NLST before they could finalize the standard (and advocate it to manufacturers).
Since GOOG is the bigger player (and the decision is influential for others), I doubt NLST would bother dealing with a JEDEC deal before the GOOG deal.
After GOOG/NLST resolution, we may see JEDEC negotiating for reasonable terms of licensing with NLST.
Since NLST memory is plug and play and requires no BIOS updates, there is LESS need by OEMs for JEDEC standardization here. In fact GOOG and others will not need JEDEC approval to start using NLST memory. This would have been different if it required changes to the BIOS or motherboard – in that case there would be a need for some standardization of how those changes should be made.
But as a consumer of memory for memory-loaded servers (where NLST HyperCloud works best), GOOG would WANT NLST IP to be licensed by JEDEC/module-makers so there are many manufacturers and prices go down on this technology. Of course, this would be the (JEDEC/RMBS-like) “royalty-based” model that CEO Hong mentioned in the Needham conference audio:
quote:
we have strong IP which create competitive barriers as well as provide future avenues for a royalty based business model
Since all these matters ARE interrelated – for instance, a GOOG settlement with NLST suddenly puts NLST in a strong spot – knowing this, GOOG may try to combine the GOOG settlement with JEDEC-licensing negotiations. While radical, this would be the sort of thing GOOG could do. It gives GOOG some street cred, plus it is beneficial in the long run to GOOG, which is an avid consumer of memory for servers.
Allied NLST IP like “embedded passives”:
A GOOG/NLST settlement raises other questions – what will become of the 4-rank modules that memory module makers have been making for some time? Is all of that a violation of NLST IP as well? Were a lot of those 4-rank modules sold before? Settlement would involve forgiving, or getting compensation for, all the other NLST IP that has been used by others.
However, so far NLST has been careful to avoid litigating too many cases – they seem to have gone after MetaRAM, GOOG and Inphi – i.e. the core players making the memory, or influential in what happens.
If JEDEC were to license NLST IP to JEDEC/memory-module-makers, they would probably need to license more than just the core IP, since to do it as well as NLST they may require the allied IP like “embedded passives” (to free up space on memory modules).
Summary:
So in summary, given previous comments about the time-sensitive nature of this for GOOG, which has ever-expanding server/memory-use growth, we may see a settlement far earlier than the “one day before jury trial” scenario.
The trend by infringers to drag out cases to settle a day before jury trial (to deplete accuser’s resources) is thus not applicable here.
And the time-sensitive nature includes not just wanting to use the memory, but also joining early so GOOG can participate in qualification and feedback for NLST HyperCloud – i.e. be part of the process early on if they ARE going to be using that memory anyway.
And then possibly also to mobilize JEDEC/NLST licensing so future multiple sources of such memory are available for GOOG.
Regarding Inphi, I don’t think there will be any of the “complicated relationship” issues (as between NLST/GOOG) since NLST probably doesn’t expect to be manufacturing anything through Inphi.
Inphi is not just a buffer chip manufacturer – so the case won’t harm them too much.
However, the actual damages retrievable from Inphi may not be huge, since they haven’t really sold this buffer chip that much (i.e. still only at the announcement level). Although they WERE prepping to replace MetaRAM as the buffer chip of choice for memory module makers (like Micron etc.).
Which I think is why Google started the litigation against Netlist with their declaratory relief action.
They do seem like ideal clients for Netlist, with Hypercloud memory. Even though they are adversaries in court at this point in time, there is the potential for them to do business together in the future.
Thanks for the detailed analysis, netlist.
I do still find myself puzzled by Google’s purchase of MetaRAM’s patents, and what they might do with them in the future.
quote:
I do still find myself puzzled by Google’s purchase of MetaRAM’s patents, and what they might do with them in the future.
As an NLST shareholder, I would be happy to see GOOG transfer that IP to NLST. Although much of it may not be valuable (like IP on stacked memory which NLST has criticized for asymmetrical data lines etc.).
A related question is what GOOG intends to do with the internal hardware division – or at least the sub-section that was involved with development of the “internal” (don’t know who actually manufactured that for GOOG) infringing memory modules that GOOG is using.
If GOOG intends to keep that division, they may need some IP like MetaRAM’s for the future (at least to mount “retaliatory lawsuits”).
Hi netlist,
I suspect that Google wants to maintain their own independent ability to develop and manufacture hardware for their own internal uses. It is possible that’s the reason for the acquisition of the patents, but it was still a surprise to see. I’m not sure that they would transfer the MetaRAM IP over to Netlist, but I’ll try to keep my eyes open in case it happens.
Yes, makes sense. Although GOOG’s hardware efforts are hard to gauge. We don’t even know how GOOG made those memory modules, and in what number, or if MetaRAM was involved with that (it can’t have been, if MetaRAM says it only made $37,000 worth and destroyed them at that).
http://www.baselinemag.com/c/a/Infrastructure/How-Google-Works-1/
How Google Works
By David F. Carr
2006-07-06
quote:
Google runs on hundreds of thousands of servers—by one estimate, in excess of 450,000—racked up in thousands of clusters in dozens of data centers around the world.
And this is from 2006. But as the other paper you posted above:
DRAM Errors in the Wild: A Large-Scale Field Study (pdf)
http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
suggests they may have a variety of hardware.
http://en.wikipedia.org/wiki/Google_platform
Current hardware
Servers are commodity-class x86 PCs running customized versions of Linux. The goal is to purchase CPU generations that offer the best performance per dollar, not absolute performance.[7] Estimates of the power required for over 450,000 servers range upwards of 20 megawatts, which cost on the order of US$2 million per month in electricity charges. The combined processing power of these servers might reach from 20 to 100 petaflops.
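A quick back-of-envelope check of those numbers (the electricity rate is my assumption – roughly a typical US industrial rate – not a figure from the article):

```python
# Sanity-check the quoted figures: 450,000 servers drawing 20 MW total,
# costing "on the order of US$2 million per month". The $/kWh rate below
# is an assumed round number, not from the article.
SERVERS = 450_000
TOTAL_MW = 20.0
RATE_USD_PER_KWH = 0.12  # assumed industrial electricity rate

watts_per_server = TOTAL_MW * 1_000_000 / SERVERS
monthly_kwh = TOTAL_MW * 1_000 * 24 * 30  # kW times hours in a 30-day month
monthly_cost = monthly_kwh * RATE_USD_PER_KWH

print(f"{watts_per_server:.0f} W per server")  # ~44 W (a lower bound, per the quote)
print(f"${monthly_cost / 1e6:.1f}M per month")  # ~$1.7M, same order as the quoted $2M
```

The per-server wattage comes out quite low, which fits the "range upwards of 20 megawatts" wording – 20 MW is the floor of the estimate, not the actual draw.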
Here’s an article on a GOOG server:
http://news.cnet.com/8301-1001_3-10209580-92.html
April 1, 2009 2:26 PM PDT
Google uncloaks once-secret server
by Stephen Shankland
This article suggests GOOG builds a battery into its server – and may thus avoid separate UPS costs (i.e. can tolerate an interruption before a generator is started).
The idea of using a battery is one many may have thought of – except GOOG has done it (because there is a critical mass of such people there – as soon as someone proposed it, there would be many who would immediately warm to the idea – as opposed to a more conventional company).
As the article states – the loss of efficiency in conversion is important, as it directly impacts the heat that then has to be managed with air conditioning etc.
They have also simplified the motherboard (or possibly the power supply, as the comments suggest) to do the 12V to 5V conversion (usually also required by motherboards from power supplies) – and this simplifies the use of a single voltage, i.e. a 12V battery.
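A rough illustration of why conversion efficiency matters in watts – the per-server load and the efficiency figures below are assumed round numbers, not figures from the article:

```python
# Every watt lost in AC/DC or DC/DC conversion becomes heat the data
# center must then remove with cooling. The load and efficiencies here
# are assumed round numbers for illustration only.
server_load_w = 200.0  # assumed per-server DC load
typical_eff = 0.75     # assumed multi-stage conversion chain
improved_eff = 0.90    # assumed simplified single-voltage (12V) chain

def conversion_loss(load_w: float, efficiency: float) -> float:
    """Watts dissipated as heat by the power conversion stages."""
    return load_w / efficiency - load_w

print(f"{conversion_loss(server_load_w, typical_eff):.1f} W wasted")   # 66.7 W
print(f"{conversion_loss(server_load_w, improved_eff):.1f} W wasted")  # 22.2 W
```

Multiplied across hundreds of thousands of servers, that difference in waste heat (and the cooling to remove it) is substantial.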
quote:
The Google server was 3.5 inches thick–2U, or 2 rack units, in data center parlance. It had two processors, two hard drives, and eight memory slots mounted on a motherboard built by Gigabyte. Google uses x86 processors from both AMD and Intel, Jai said, and Google uses the battery design on its network equipment, too.
The comments suggest the motherboard is:
http://www.gigabyte.com.tw/Products/Networking/Products_Spec.aspx?ProductID=1075&ProductName=GA-9IVDT
Which on the face of it would only support up to 12GB of DDR2 400MHz memory.
One thing to note though – they DO have all the memory slots in use – though that would make sense from an economic standpoint, i.e. get the least dense memory modules (cheapest) and populate all the slots.
GOOG infrastructure is designed for fault tolerance and motherboard failure – it is possible the systems are also designed for server variation. If so, there is no real indication that new servers being installed are not using more memory than this server that was revealed.
After all, the server GOOG showed in discovery for GOOG vs. NLST WAS infringing NLST IP (using “Mode C” and, by implication, 4-rank memory). It could be that since GOOG had chosen not to deny using “Mode C”, it chose to reveal a server demonstrating that as well – to simplify the process of eventual settlement and arbitration.
One reason GOOG could use the 4-rank memory despite not having systems that are heavily memory-loaded (if 8-12GB still runs at top speed) could be reduced power consumption and the ability to use cheaper memory chips. However, would the manufacturing of such custom memory not be expensive as well (compared to buying from a mass producer of such memory)?
The article is dated April 1, 2009 – the author confirms that it was not an April Fool’s article.
Hi netlist,
Informative articles, especially the CNET one from April 1st.
There is an Exaflop patent (Exaflop shares the same address as Google on its patent submissions), Data center uninterruptible power distribution architecture, which includes the use of a 12 Volt lead acid battery in the event of power failure.
The patent looks like it might describe an earlier generation of Google’s use of a 12 volt battery for each server. A number of the many granted patents and patent applications assigned to Exaflop mention the use of a 12 volt battery.
Google also has a few granted and pending patent filings on motherboard cooling systems, a modular data center, and other data center approaches (including a water-based data center).
But I haven’t seen any published patent filings from them (other than the MetaRAM assigned ones) that focus upon memory.
NLST officially recognizes settlement with MetaRAM.
I wonder why the delay – is it because they have to wait for the final approval by court to appear ?
The other alternative – that NLST is savvy about holding back on news and posting it (like the previous 2 patents) at a time when the stock is being manipulated down by market makers etc. If so, that would be interesting – and the opposite of what some companies wind up doing, i.e. screwing shareholders. With insiders owning 50%-plus of NLST, that is perhaps one of the advantages – management is better aligned with shareholder interest.
Stock price movements may not harm a company in the long run, but they scare out many shareholders – leading to shareholder churn and (at least on stock bulletin boards) an absence of long-time holders. So in that sense at least, it helps if a company's stock price does not move up/down that much (or get manipulated down by market makers during a lull period).
http://finance.yahoo.com/news/Netlist-Announces-Settlement-prnews-1484777084.html?x=0&.v=1
Netlist Announces Settlement of Patent Infringement Lawsuits With MetaRAM
Press Release Source: Netlist, Inc. On Thursday January 28, 2010, 1:25 pm EST
Hi Netlist,
It’s quite possible that they waited because they wanted to get legal filings out of the way, and a final settlement order from the two Courts involved. Making an announcement in a timely fashion after legal requirements were fulfilled would make it less likely to be perceived that they were announcing news in an effort to manipulate stock prices.
GOOG’s attorneys King & Spalding add some IP litigation attorneys to the team:
01/27/2010 90 MOTION for leave to appear in Pro Hac Vice Mark H. Francis ( Filing fee $ 210, receipt number 44611004730.) filed by Google Inc.. (Attachments: # 1 Proposed Order)(jlm, COURT STAFF) (Filed on 1/27/2010) (Entered: 01/28/2010)
01/27/2010 91 MOTION for leave to appear in Pro Hac Vice for Daniel Miller ( Filing fee $ 210, receipt number 44611004730.) filed by Google Inc.. (Attachments: # 1 Proposed Order)(jlm, COURT STAFF) (Filed on 1/27/2010) (Entered: 01/28/2010)
01/27/2010 92 MOTION for leave to appear in Pro Hac Vice for Scott T. Weingaertner ( Filing fee $ 210, receipt number 44611004730.) filed by Google Inc.. (Attachments: # 1 Proposed Order)(jlm, COURT STAFF) (Filed on 1/27/2010) (Entered: 01/28/2010)
01/27/2010 93 MOTION for leave to appear in Pro Hac Vice for Susan Kim ( Filing fee $ 210, receipt number 44611004730.) filed by Google Inc.. (Attachments: # 1 Proposed Order)(jlm, COURT STAFF) (Filed on 1/27/2010) (Entered: 01/28/2010)
01/27/2010 94 MOTION for leave to appear in Pro Hac Vice for Allison Altersohn ( Filing fee $ 210, receipt number 44611004730.) filed by Google Inc.. (Attachments: # 1 Proposed Order)(jlm, COURT STAFF) (Filed on 1/27/2010) (Entered: 01/28/2010)
It seems Scott Weingaertner is the significant attorney with expertise in “employee trade secret misappropriation”:
http://www.marketwire.com/press-release/King-Spaldings-Growth-Continues-in-New-York-760126.htm
SOURCE: King & Spalding
Aug 13, 2007 12:02 ET
King & Spalding’s Growth Continues in New York
Weingaertner focuses on intellectual property litigation and counseling with particular experience handling disputes regarding patent infringement, licenses and employee trade secret misappropriation, as well as patent interferences and ex parte procedures before the U.S. Patent and Trademark Office. He is well versed in the technology areas of semiconductors and other electronics, computer software, medical and other mechanical devices, and financial services. He earned S.B. and S.M. degrees from the Massachusetts Institute of Technology, and a J.D. from the University of Pennsylvania.
GOOG attorneys King & Spalding probably needed some IP attorneys – that is understandable.
The original reading still stands – that if King & Spalding is unranked for IP litigation, but #2 for arbitration, it seems likely that it was the #2 part which brought them to GOOG's attention.
This because Fish & Richardson (which they dumped) is already a respected law firm for IP litigation.
In any case, GOOG may not have liked the direction in which things were going – or the previous prosecution pattern of previous attorneys, possibly moving to a new tack (with new faces).
This article gives the general sense of the situation – Fish & Richardson was ideal for IP litigation, while King & Spalding for “general matters”.
Fish & Richardson (GOOG’s previous attorneys) is consistently rated among top 2 in overall as well as “patent prosecution”. While King & Spalding is #13 in “overall category”, and not even listed in top 30 for “patent prosecution”:
http://www.law.com/jsp/iplawandbusiness/PubArticleIPLB.jsp?id=1202437741766
or
http://www.slwip.com/about/whats_new/documents/2010Top10PatentProsecution.pdf
The Guardians
Which law firms do the country’s biggest corporations turn to when they need help obtaining, asserting, and defending their valuable intellectual property?
By Erik Sherman
IP Law & Business
December 01, 2009
…
The Big List
On the surface, there is a lot of consistency in how widely the top companies spread their work. Consider that the 36 firms included in the overall ranking that, along with our patent prosecution and IP litigation rankings, appears here were mentioned by companies at least five times for doing either prosecution or litigation work. But only five firms—Baker Botts, Fish & Richardson, Foley & Lardner, K&L Gates, and Greenberg Traurig—got enough mentions to also qualify for spots on our prosecution and litigation lists.
…
With two exceptions, no firm got more than three mentions from companies in a single industry. The exceptions: Baker Botts and Fish & Richardson (a finalist in this year’s IP Litigation Department of the Year contest; see “Perfecting the Art of War.” ). Both were named by multiple high-tech and/or telecommunications companies.
…
Fish did litigation and prosecution work for Apple Inc., H-P, and Intel Corporation, and litigation for Microsoft Corporation.
By contrast, the top firm with the most diverse docket was King & Spalding, whose seven mentions came from seven different clients, each of them in a different industry. For example, the firm did litigation work for The Coca-Cola Company (beverage), Chevron Corporation (energy), and International Business Machines Corporation (technology), and prosecution for General Electric Company (diversified financials), The Procter & Gamble Company (household and personal products), Citigroup Inc. (financial services), and Costco Wholesale Corporation (retail).
…
With eight and six mentions, respectively, two of the top litigation firms—Fish & Richardson and IP Litigation Department of the Year winner Quinn Emanuel Urquhart Oliver & Hedges (see “What Rhymes with Win?” ) had four clients between the tech and telecom sectors. Compare that to Wilmer and King & Spalding, with four mentions spread across four different industries. When it comes to litigation, high-tech companies and telecoms stand out, with top industry players using 15 out of the 18 firms to rack up at least four mentions. Given that, between them, these companies account for only 12 percent of the 100 biggest companies, the fact that they hired so many top litigation firms is certainly noteworthy. Is it any wonder that technology companies—frequent targets of so-called patent troll infringement claims—have been a driving force in the push to reform the nation’s patent system?
…
The Prosecution List
While it may not be as lucrative as litigation, patent prosecution work can be plentiful. Consider that in 2008, Fortune 100 corporations collectively received well over 21,000 patents, according to figures from the Intellectual Property Owners Association and the Patent and Trademark Office.
So who’s doing the bulk of that work? Thirty firms earned at least four mentions. At the top of the list, there is little overlap with the top litigation shops. Only three firms—Baker Botts, Fish & Richardson, and K&L Gates—climbed into the top four on both lists.
http://www.law.com/jsp/iplawandbusiness/PubArticleIPLB.jsp?id=1202437199242
The IP Litigation Department of the Year
IP Law & Business
December 01, 2009
Winner: Quinn Emanuel What Rhymes with Win?
Finalist: Fish & Richardson Perfecting the Art of War
Finalist: Weil, Gotshal & Manges Tried and True
Finalist: Winston & Strawn The Net Effect
http://www.fr.com/news/2010/january/americanlawyer.pdf
The Fish docket is mostly defense cases, but the firm can flex its enforcement muscles. Case in point: Fish helped Callaway Golf Company win an injunction blocking the sale of Acushnet Company's Titleist Pro V1, which generated $1.9 billion in sales in 2008. While that win was sent back for a retrial due to a technical issue, Callaway GC Michael Rider says he has no qualms about hiring Fish to handle all his patent litigation: “They know the patent law absolutely cold, and know how to try patent cases.”
http://www.fr.com/news/2010/january/FishIPLaw360.pdf
Law360, New York (January 01, 2010)
Fish & Richardson PC
Fish & Richardson earned top spot in Law360's IP firm rankings for its success in reversing over $700 million in damages awards against Microsoft Corp. and in forging new law concerning the fraud standard in trademark disputes.
http://www.fr.com/news/articles.cfm?topicid=13
Recent Wins
Anyone know where the Google Caffeine project page is now ?
Originally announced at:
http://www2.sandbox.google.com/
Or has it been integrated already (i.e. working in some random data center as originally anticipated).
Hi Netlist,
The search at that address was retired a few months ago, and Matt Cutts announced on his blog in early November to Expect Caffeine after the holidays. In that post, Matt mentioned that they would be showing Caffeine results at one data center so that they could continue to test it.
From what I have heard, Caffeine results were being shown for roughly half the visitors to the data center at IP address 209.85.225.103. It's quite possible that Google has rolled out Caffeine results to more data centers at this point, but we can't say for certain.
In re: the above article that this thread is under – no one thinks it's too much of a coincidence that GOOG announces this Caffeine project exactly one week after NLST announces their HyperCloud? NLST comes out with something seemingly revolutionary in memory and cloud computing, and a week later GOOG announces that they're upgrading their infrastructure code and doing an overhaul of their browser to make it faster? Something that would require a memory upgrade?
And, as of this writing, the NASDAQ is up big, and NLST is down on very low volume. They're driving it down with 100-share trades, and then buying 10 & 15K share blocks once they get it down.
NLST is being manipulated like there’s no tomorrow.
quote:
no one thinks it's too much of a coincidence that GOOG announces this Caffeine project exactly one week after NLST announces their HyperCloud? NLST comes out with something seemingly revolutionary in memory and cloud computing, and a week later GOOG announces that they're upgrading their infrastructure code and doing an overhaul of their browser to make it faster? Something that would require a memory upgrade?
Although having more memory in servers would allow GOOG to do things on a different scale, media reports and GOOG's own info seem to suggest an improvement in the algorithms and the like. Any improvement in the hardware does not seem to be explicitly mentioned.
GOOG was using the infringing memory prior to GOOG announcement. If anything Caffeine may have been based on that memory. Thus it would have little to do with NLST’s announcement schedule.
Recall that GOOG had gone to court to prevent NLST seeking to shut down GOOG servers – that case is GOOG vs. NLST.
During discovery for GOOG vs. NLST, GOOG was forced to show a GOOG server to NLST which had “Mode C” (smoking gun for “4-rank”). That led to NLST filing case NLST vs. GOOG.
As reported above, both GOOG and NLST asked court to consolidate the two cases because they are dealing with same memory.
Now it seems (Feb 3, 2010) Judge Armstrong has DENIED the consolidation. She is saying the GOOG vs. NLST case is well on its way (with discovery on track), so why delay it.
And to let the cases go ahead separately.
What does this mean?
It means the GOOG vs. NLST (which is at an advanced stage) will not be delayed.
Note that both parties were keen to consolidate the cases.
Here is what she says:
quote:
Based on that commonality, the parties request that the Court: (1) consolidate the cases for trial under Federal Rule of Civil Procedure 42(a); (2) vacate the pretrial schedule and trial date in the First Case in order to coordinate both cases for trial; and (3) schedule a date for a Case Management Conference to set a new pretrial schedule applicable to both cases.
The Court is not convinced that the parties' requests to consolidate and vacate the pretrial schedule and trial date in the First Action are either necessary or appropriate. The First Case [...] based on Netlist's decision to file a new action over a year after the First Case was filed, particularly given that the new action purportedly involves the same memory modules at issue in the First Case.
The Court also has serious concerns regarding the potential for the instant litigation to expand exponentially, thereby increasing the cost to the parties and consuming an inordinate amount of judicial resources. Although a settlement conference has been scheduled in the First Case for August 3, 2010, the Court believes that it is in the parties' mutual interest to engage in a settlement conference or mediation, sooner rather than later—before the parties have expended what likely will be a considerable amount of time and resources litigating these two cases.
I’m pretty confident NLST is going to make another run again soon. I’m not some delusional buy and hold long who tells himself whatever he needs to hear while he keeps losing money.
I bought the stock @ 2.46 on that 1st Friday during the run-up. I could have bought much cheaper, but it wasn't until I did the DD that I realized the implications of what they had come up with. I sold the following Monday a little under 6, and then bought back in @ the close on Tuesday @ 4.03. Sold it at the end of the week @ 6.76. Played it a few more times on momentum and spikes here and there. But when they got it down to 3.34 the other day, I had to buy back in. And I bought back in deep. Got me 30K shares @ 3.43/3.42.
Watching it trade the last few days, it’s really obvious that the crooks are walking it down on super low volume.
In the meantime, I really wish NLST would offer a little forward guidance.
I love this page you guys are well informed and very knowledgeable…
Can you tell me where you review the court docs for these cases?
Thank you much.
Never mind. I found it…
Cheers!
quote:
Can you tell me where you review the court docs for these cases?
For completeness here is the link:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=11572&mid=13025&tof=1&frt=2#13025
Re: update on the various court cases
…
By the way, anyone wanting to look for court cases can do so at:
http://pacer.uspci.uscourts.gov/
You need to register, but only need to pay after fees reach a certain amount (you can pay by credit card).
Click on “Enter U.S. Party/Case Index”.
Click on “All Court Types”
search for netlist:
Party Name: netlist
The cases will be listed (though with cryptic ids) – here is a guide:
NLST vs. Inphi:
4 NETLIST INC. cacdce 2:2009cv06900 09/22/2009 830
GOOG vs. NLST:
10 NETLIST, INC. candce 4:2008cv04144 08/29/2008 830
NLST vs. GOOG:
13 NETLIST, INC. candce 4:2009cv05718 12/04/2009 830
Inphi vs. NLST:
14 NETLIST, INC. cacdce 2:2009cv08749 11/30/2009 830
Clicking on the ID will show a page – you can view in HTML (webpage) or as pdf. View in HTML for now.
Click on “Docket Report”.
This will show what’s going on – and will have links for the individual dockets (judge’s ruling, filings by NLST/GOOG etc.).
Another patent application assigned to Google was published at the end of January:
Methods and Apparatus of Stacking DRAMS
From the patent filing:
Netlist’s ‘386 patent looks like it was filed on July 1, 2005, which was a couple of months earlier than the provisional patent application.
Not sure if any of this has any impact or significance for any pending litigation, and there is the possibility that there might be additional unpublished patent filings as well, but thought it was worth mentioning.
Another patent application assigned to Google was published at the end of January:
Methods and Apparatus of Stacking DRAMS
Thanks.
Netlist’s ‘386 patent looks like it was filed on July 1, 2005, which was a couple of months earlier than the provisional patent application.
NLST claims their IP dates back to March 2004 (from court filings).
Yes, this seems to be a MetaRAM patent that may have been in process (continuation of earlier patent).
It says it is a continuation of this patent:
http://www.freepatentsonline.com/7599205.pdf
Which itself is a continuation of:
http://www.freepatentsonline.com/7379316.pdf
This is a long-standing patent thread at MetaRAM – for “stacking DRAMs”.
Since GOOG is now owner of the original thread, and all derivative patents, we see GOOG as the direct owner. Note that the lawyer is Fish & Richardson (GOOG's lawyers). Don't know if Suresh Rajan is now a GOOG employee – but it would make sense if MetaRAM's main inventors were brought into GOOG's hardware division.
This is related to the “stacking DRAMs” stuff that MetaRAM was doing and which, as I pointed out earlier, NLST was critical of for its asymmetrical lines to memory chips (i.e. asymmetric delays along the lines).
As posted above:
Compare NLST to MetaRAM (now bankrupt) design:
http://www.ansoft.com/ie/Track2/DDR3%20Memory%20Module%20Design.pdf
It shows MetaRAM was to deliver 16GB 2-rank R-DIMMs in Dec 2008 at a slower 1066 MT/s than the 1333 MT/s for the 8GB part (and slower than the 1333 MT/s of the NLST 16GB HyperCloud).
You can also see the problems with MetaRAM design – layout of chips is asymmetrical, and height increases considerably for the 16GB. It has the Hynix label on it.
You can see the “discrete decoupling capacitor” (compare to “embedded passives” with NLST IP).
And compare with NLST comments (also from earlier post above):
http://www.netlist.com/technology/technology.html
While some packaging companies stack devices to double capacity, Netlist achieves the same result without stacking, resulting in superior signal integrity and thermal efficiency. Stacking components results in unequal cooling of devices, causing one device to run slower than the other in the stack. This often results in module failures in high-density applications.
The density limitation is solved by proprietary board designs that use embedded passives to free up board real estate, permitting the assembly of more memory components on the substrate. The performance of the memory module is enhanced by fine-tuning the board design to minimize signal reflections, noise, and clock skews.
Sorry for incorrect use of HTML tags. What tag do you use to indent ?
Hi netlist,
No apologies necessary. All your efforts towards making this thread become as informative as it is are truly appreciated.
The html element “blockquote” can be used to indent, like in my comment above.
Netlist,
I am confused. I see the following part on netlist website
NMD2G7G3510BH-D85 16GB 1066MHz 2Rx4 16GB x4 4Gb DDP Planar LP
Does DDP mean stacked(?) devices? Why is Netlist selling stacked devices and not using their own proprietary technology?
Netlist,
BTW, if stacked technology is being used by Netlist then will the MetaRAM patent apply? Any ideas on how the MetaRAM patent might limit Netlist?
quote:
I am confused. I see the following part on netlist website
NMD2G7G3510BH-D85 16GB 1066MHz 2Rx4 16GB x4 4Gb DDP Planar LP
Does DDP mean stacked(?) devices? Why is Netlist selling stacked devices and not using their own proprietary technology?
How do you presume it is “stacked DRAM” ?
NLST explicitly disparages stacked DRAM use by “other companies”:
MetaRAM had other IP (including “stacked DRAMs”) which it DID NOT use against NLST. What does that suggest ?
Instead it was one patent 7472220 that was used in
retaliatory suit against NLST:
http://www.freepatentsonline.com/7472220.pdf
As posted above:
7472220 – MetaRAM license to NLST ..
http://assignments.uspto.gov/assignments/q?db=pat&qt=&reel=&frame=&pat=7472220&pub=&asnr=&asnri=&asne=&asnei=&asns=
That patent is now licensed to NLST as part of settlement (and any buyer – GOOG or other – of this IP from MetaRAM will not be able to use it against NLST).
From the PR at time of NLST/MetaRAM settlement:
http://finance.yahoo.com/news/Netlist-Announces-Settlement-prnews-1484777084.html?x=0&.v=1
Netlist Announces Settlement of Patent Infringement Lawsuits With MetaRAM
Press Release Source: Netlist, Inc. On Thursday January 28, 2010, 1:25 pm EST
…
A provision in the settlement protects Netlist if another company purchases MetaRAM’s patent and attempts to seek action against Netlist in the future.
quote:
I am confused. I see the following part on netlist website
NMD2G7G3510BH-D85 16GB 1066MHz 2Rx4 16GB x4 4Gb DDP Planar LP
Does DDP mean stacked(?) devices? Why is Netlist selling stacked devices and not using their own proprietary technology?
“4Gb DDP” seems to be some type of memory as can also be seen here:
http://www.intel.com/technology/memory/ddr/valid/ddr2_800_sodimm_results.htm
M470T5267AZ3-CE7 Samsung K4T4G274QA-TCE7 4GB 4Gb(DDP) 8 5-5-5 0801 No
More specifically:
DDP = Dual Die Packaging
Where you have two dies in same packaging (as opposed to the traditional one die in one packaging).
That is, two memory chip wafer pieces inside one packaging.
This is not the same thing as “stacked DRAM” (MetaRAM) which relates more to how you organize memory chip packages on a memory module.
http://www.freshpatents.com/Memory-system-dt20080131ptan20080025128.php
This might be the “NetVault” line of products which CEO Hong has mentioned in the Needham conference audio (these include onboard flash memory to back up memory module contents in case of power failure).
Sorry – cut the last para out about NetVault. I was half thinking that, until appropriate Google searches revealed DDP means something else.
Netlist,
It is not clear that staked and DDP are different. From,
http://en.wikipedia.org/wiki/Dynamic_random_access_memory
Stacked RAM modules contain two or more RAM chips stacked on top of each other. This allows large modules (like 512mb or 1Gig SO-DIMM) to be manufactured using cheaper low density wafers. Stacked chip modules draw more power.
Does this not mean DDP = 2 dies in same package = stacked?
Totally confused now.
Another item from Netlist webpage
http://www.netlist.com/technology/technology.html
>>While some packaging companies stack devices to double capacity, Netlist achieves the same result without stacking, resulting in superior signal integrity and thermal efficiency.
appears that DDP is the same as stacked? No?
would it be dangerous if HyperCloud is using stacked technology?
quote:
Stacked RAM modules contain two or more RAM chips stacked on top of each other. This allows large modules (like 512mb or 1Gig SO-DIMM) to be manufactured using cheaper low density wafers. Stacked chip modules draw more power.
Does this not mean DDP = 2 dies in same package = stacked?
An attempt at explanation of the terminology:
“die” – small stamp-sized piece of the shiny silicon wafer
http://en.wikipedia.org/wiki/Die_preparation
“memory chip” – die embedded within that black-plastic type stuff that people usually call a “chip” – has metal conductive pins coming out of it (shorter in the case of surface-mount chips).
“memory module” – that stuff you put in the memory slot of your computer – comprising a circuit board (maybe sophisticated many layered or including resistor/capacitors within it – as with NLST’s “embedded passives”). Circuit board has many “memory chips” on it (see above).
NLST technology lies not in the “die”, or in the “memory chip”. They buy the memory chips from Hynix and others (the first NLST HyperCloud is slated to use Hynix “memory chips”).
So Hynix is a “memory chip” manufacturer.
NLST is a consumer of those “memory chips” and a manufacturer of “memory modules”.
NLST combines “memory chips” so they fit on a “memory module”. This they do by IP (intellectual property/patents) that includes “embedded passives”, plus IP on how to place “memory chips” for even heat dissipation. That is, there is IP related to how you structure a “memory module” i.e. how you use those “memory chips” to construct a “memory module”.
In addition NLST has IP in extra circuitry that goes on the “memory module”. These are chips that NLST makes on its own – the buffer chip is a specialized ASIC for doing stuff with the control signals, address lines and data lines that go to the “memory chips” on the “memory module”.
In addition NLST has some circuitry for “load isolation” so that only some set of “memory chips” is connected/visible at a time (perhaps imprecise here), etc.
This is NLST’s purpose. They do not indulge in “memory chips” design, nor in “die” or wafer. They basically make complete “memory modules” that people can buy and put in their computer motherboard directly.
So NLST is a “memory module” maker, and it has IP to back that up. That IP relates to how the “memory module” is made/structured as well as all the EXTRA circuitry that they have put on that memory module.
MetaRAM was similar – they ALSO did this (or did, until they were prevented from doing so after the settlement, and well... bankruptcy).
MetaRAM ALSO has IP in load isolation, and in “memory chip” placement on the memory module. Since “memory module” is usually a standard sized piece of circuit board (albeit advanced circuit board), they have to come up with ways to fit more memory chips on there. Their “stacked DRAM” IP relates to THAT aspect.
NLST does not use “stacked DRAMs” because as pointed out above they feel it is inferior way of doing it.
Coming back to MetaRAM – so MetaRAM ALSO makes “memory modules”. In addition they were willing to work with Hynix and others to either sell them the memory modules (i.e. completed memory modules) for resale, OR they were willing to share their IP with Hynix and others so OTHER companies could also do something similar. This is the “royalty-based” model (i.e. instead of just making all memory modules yourself). This model is exemplified by RMBS. NLST has referred to it in the Needham conference audio.
quote:
we have strong IP which create competitive barriers as well as provide future avenues for a royalty based business model
The problem with MetaRAM was they didn't have IP in “embedded passives” etc., which means they were not able to create more space on the same small circuit board as NLST can.
They try to fit more “memory chips” by stacking them i.e. “stacked DRAMs” or other stuff to fit in more chips on the same “memory module”.
As noted above, that is not how NLST does it.
A second problem with MetaRAM was that their IP is from much later – and, it could be said, is “derivative” of or inspired by NLST IP. You have to ask yourself why a high flier like MetaRAM (with support from INTC and others – with Hynix and STEC and others all planning to use their IP/buffer chips) – why MetaRAM suddenly closed shop? Was it related to the GOOG/NLST lawsuit, and was there some realization within MetaRAM? Why did MetaRAM say they only sold $37,000 worth of stuff and “destroyed” it (from MetaRAM court filings) – what's the hurry to “destroy” stuff? MetaRAM was trying to minimize the potential for infringement penalty.
So basically NLST makes, and MetaRAM made, whole “memory modules” – they do not make “memory chips” or “dies”.
Inphi is similar – except they may not even make the “memory module”, but just the buffer chips and allied circuitry so others can make it. Difference is they hold even less IP than MetaRAM. Inphi is a component maker – they make lots of different components. They were hoping to step in after MetaRAM dropped out.
So in summary:
Stacked DRAM refers to stacking “memory chips” – and is a way of arranging the “memory chips” on the “memory module”.
DDP – dual die packaging. This is when “memory chip” manufacturers like Hynix make “memory chips” with TWO dies in them.
So the “memory module” that NLST/MetaRAM make can include a normal “memory module” or a DDP “memory module”. They thus label their memory module specs with “DDP” or no DDP.
Hope this resolves the confusion between:
DDP – this is done by “memory chip” manufacturers like Hynix etc.
stacked DRAMs – this was done by MetaRAM in how it places those “memory chips” on “memory module”
They are two different things – relating to things that go on at two different scales – one within the “memory chip” black plastic packaging, and one on the “memory module” circuit board.
Hope this helps.
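For concreteness, here is a minimal sketch of the capacity arithmetic behind a part line like the “16GB 1066MHz 2Rx4 4Gb DDP” listed above. The organization values (a 72-bit ECC bus with 64 data bits, devices-per-rank derived from device width) are standard registered-DIMM assumptions on my part, not figures from a Netlist datasheet – the point is only that “4Gb DDP” describes the per-package density (two dies inside one package), while ranks and device width describe the module-level layout:

```python
# Hedged sketch: DIMM capacity arithmetic from rank count, device width,
# and per-package density. Assumes a standard ECC RDIMM organization
# (72-bit bus, 64 data bits); "4Gb DDP" means a 4Gb package built from
# two 2Gb dies -- the DDP part is invisible at this level of math.

def module_capacity_gb(ranks, device_width_bits, density_gbit, ecc=True):
    """Usable (data) capacity of a DIMM in gigabytes."""
    bus_width = 72 if ecc else 64                 # ECC modules carry 8 extra bits
    devices_per_rank = bus_width // device_width_bits
    total_gbit = ranks * devices_per_rank * density_gbit
    data_fraction = 64 / bus_width                # exclude the ECC devices
    return total_gbit * data_fraction / 8         # gigabits -> gigabytes

# 2Rx4 module from 4Gb packages: 2 ranks x 18 x4 devices x 4Gb = 144Gb,
# of which 128Gb (16GB) is data and the rest is ECC.
print(module_capacity_gb(ranks=2, device_width_bits=4, density_gbit=4))  # -> 16.0
```

Whether those 4Gb packages hold one die or two (DDP) does not change this module-level total; it only changes what the chip maker (Hynix etc.) did inside the package.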
Slight correction to sentence above ..
quote:
So the “memory module” that NLST/MetaRAM make can include a normal “memory module” or a DDP “memory module”. They thus label their memory module specs with “DDP” or no DDP.
Should read:
So the “memory module” that NLST/MetaRAM make can include a normal “memory chip” or a DDP “memory chip”. They thus label their memory module specs with “DDP” or no DDP.
Thank you netlist. I think I am beginning to understand
the difference.
In a recent interview, NLST CEO said they strategically dedicated and spent over $10 million for R&D costs for products such as Hypercloud and NetVault. Not surprising that they are vigorously protecting investment in IP portfolio through negotiation and litigation as last resort. They seem to have facts and law on their side.
First, MetaRAM settled ‘386 patent infringement in December 2009, and agreed to cooperate and stop its infringement. Will MetaRAM’s cooperation include full disclosure of relevant customer lists including in Google’s case?
Next, Inphi must Answer by Feb. 11, 2010 to NETLIST’s Amended Complaint that added ‘912 and ‘274 patents in addition to initial allegations of ‘386 patent infringement. USPTO issued Patent 7,619,912, entitled “Memory Module Decoder” on 11/17/2009, and Patent 7,636,274, entitled “Memory Module with a Circuit Providing Load Isolation and Memory Domain Translation” on 12/22/2009. It appears Inphi hired a former employee of MetaRAM.
On Google’s Declaratory Relief on Non-Infringement of ‘386 patent, and Netlist v Google case for ‘912 patent infringement, a case management conference is set for 3/4/2010.
Is Google running out of time? As in the above post, Judge Armstrong denied the request to consolidate, and in essence said either settle or try the case on its merits, but no delays. Interesting to note that the Order strongly suggested the parties “engage in a settlement conference or mediation, sooner rather than later”. The judge also mentioned that one of the reasons for denial was that the Court had already held the claims construction hearing and construed the disputed claim terms. It seems that the Court ruled largely in favor of NLST's claims construction. The same Court granted Netlist's discovery request to examine a Google server. Subsequently, NLST sued Google alleging ‘912 patent infringement.
Right.
Also, earlier, Judge Armstrong had denied GOOG’s fishing expedition into “use of ‘386 patent prosecution history”. I can’t find that at the moment, but I recall reading it somewhere.
quote:
First, MetaRAM settled ‘386 patent infringement in December 2009, and agreed to cooperate and stop its infringement. Will MetaRAM’s cooperation include full disclosure of relevant customer lists including in Google’s case?
Yes, it would be interesting to note the exact settlement in NLST/MetaRAM.
There is even the possibility that IP beyond what MetaRAM used in its retaliatory lawsuit could be compromised. However, my gut feeling is that if the case had gone to a jury trial and MetaRAM had been found liable, THEN much of MetaRAM’s IP (even if sold to GOOG previously) could have been rendered suspect.
However, with a settlement there is no legal bar on MetaRAM (or on those who bought from MetaRAM) – i.e. there is no enforcement – except as to what MetaRAM owns now. So the IP most relevant to NLST (which MetaRAM still held for use in a retaliatory lawsuit) was handed over to NLST (or licensed with a bar on any future buyer using it against NLST).
On NLST’s investor SEC filing page, it shows a report date of Feb. 11, 2010 for Renaissance Technologies LLC owning more than 1.4 million shares as of Dec. 15, 2009.
Updates on NLST vs. Inphi.
Updates on NLST vs. GOOG.
We have Inphi answer in NLST vs. Inphi. Standard boilerplate answer – we challenge patents etc.
We have GOOG answer in NLST vs. GOOG. Standard boilerplate answer.
However there is some interesting information in the GOOG answer about goings on at the JEDEC meetings (specifically “JEDEC JC-45” committee meetings).
From GOOG answer we find that:
INTC presented their FB-DIMM quad-rank (4-rank) proposal in May, June, August and December 2007.
GOOG says that at June 2007 meeting, NLST representatives “withheld” information that they held patents in this area and or were in process for new patents.
At August 2007 meeting the same.
At December 2007 meeting GOOG says that NLST revealed that it held IP which may apply to the FB-DIMM and 4-rank/quad-rank designs.
However NLST was willing to provide access to that IP on RAND terms (as JEDEC members do as part of JEDEC).
http://en.wikipedia.org/wiki/Reasonable_and_Non_Discriminatory_Licensing
Reasonable and Non Discriminatory Licensing
On Jan 8, 2008, NLST inventor Bhakta sent letter to JEDEC offering RAND terms “but only identified the ‘386 patent” (which is normal).
This makes sense, as the ‘912 patent is just a continuation of the ‘386. On a superficial reading you cannot see any major difference between the two:
http://www.freepatentsonline.com/7289386.pdf
http://www.freepatentsonline.com/7619912.pdf
However, since NLST’s complaint refers to the ‘912 patent (representing the ‘386 patent thread), GOOG has chosen to focus on the ‘912 alone, while not addressing the ‘386 patent – which NLST could just as easily add to the complaint (or which is perhaps implicitly included, since ‘912 is a superset of ‘386).
The answer by GOOG is reminiscent of some of the controversy in the RMBS/JEDEC tussle. There it was alleged that RMBS knew their designs were being standardized or in some cases they patented IP AHEAD of decisions by JEDEC (knowing that those areas will become valuable to JEDEC future direction).
The case of NLST/JEDEC is simpler – here NLST IP predates (March 2004) the JEDEC standardization. The ‘386 patent had been issued (and NLST had announced it to JEDEC) prior to the JEDEC members voting for the standard.
Also, one of the inventors of 4-rank, Bill Gervasi (while at NLST), later became a JEDEC committee chair, as well as an employee at SimpleTech.
http://www.discobolusdesigns.com/personal/gervasi_modules_overview.pps
Memory Modules Overview
Spring, 2004
Bill Gervasi
Senior Technologist, Netlist
Chairman, JEDEC Small Modules
& DRAM Packaging Committees
http://www.stec-inc.com/products/DRAM/4rank_DRAM.pdf
4 Rank DRAM Modules
Addressing Increased Capacity Demand Using Commodity Memories
Bill Gervasi, VP DRAM Technology, SimpleTech
Chairman, JEDEC JC-45.3
January 19, 2006
http://www.discobolusdesigns.com/personal/stec_atca_memory_20061017.pdf
Memory Modules for ATCA and AMC
Bill Gervasi
Vice President, DRAM Technology
Chairman, JEDEC JC-45.3
Note that NLST in its complaint also alleges leakage of its IP to JEDEC – or by Texas Instruments?
I don’t know if Bill Gervasi is considered part of that leakage (that JEDEC benefitted from).
Some info on JEDEC’s JESD82-20A – FBDIMM Mode C proposed standard etc.:
http://www.jedec.org/download/search/JESD82-20A.pdf
http://www.jedec.org/download/search/JESD82-28A.pdf
JESD82-20A.pdf has the following disclaimer:
Special Disclaimer
JEDEC has received information that certain patents or patent applications
may be relevant to this standard, and, as of the publication date of this
standard, no statements regarding an assurance or refusal to license such
patents or patent applications have been provided.
http://www.jedec.org/download/search/FBDIMM/Patents.xls
JEDEC does not make any determination as to the validity or relevancy of
such patents or patent applications. Prospective users of the standard
should act accordingly.
The Patents.xls file is not available at that address now. However, this demonstrates that there were IP-infringement shadows cast on the JESD82-20A standard.
So why did GOOG infringe, knowing those caveats existed?
But for reference, here is the RMBS story:
http://en.wikipedia.org/wiki/Rambus
As can be seen their behavior was suspect in some cases – i.e. securing IP ahead of JEDEC decisions.
However they have prevailed in court despite those negatives:
http://www.mercurynews.com/business-headlines/ci_14224770
Rambus wins $900 million from Samsung
By Steve Johnson
sjohnson@mercurynews.com
Posted: 01/19/2010 05:22:07 PM PST
Updated: 01/20/2010 03:04:17 AM PST
It is now clear that NLST is claiming that FB-DIMM and 4-rank/quad-rank is infringing NLST IP.
FB-DIMM is a major part of memory design roadmap – which introduced serial signalling and the use of the AMB (buffer) on the memory module. The AMB buffer part is probably the infringing part of the JESD82-20A standard.
http://en.wikipedia.org/wiki/Fully_Buffered_DIMM
The JEDEC standard JESD206 defines the protocol, and JESD82-20 defines the AMB interface to DDR2 memory.
GOOG says there was no public info that the ‘912 had also been filed (is this true – is it not possible to search for in-process patent applications?).
GOOG answer suggesting that JEDEC voted for standard finally despite knowing of NLST IP issues (with ‘386 patent if not ‘912 patent) – quote:
The Intel proposed changes to JESD82-20 were incorporated in JESD82-20A. The JEDEC members voted to issue the JESD82-20A standard, having all such JEDEC members, except for Netlist representatives, vote unaware of the patent application that led to the ‘912 patent.
GOOG claims that that approval (without NLST participation) went on while “unaware of” the ‘912 patent in process.
quote:
Netlist has affirmatively attempted to disclose the ‘386 patent as relevant to certain JEDEC standards, with knowledge that they had filed a continuation of that patent, which was to issue as the ‘912 patent.
GOOG says – quote:
Netlist’s silence as to the patent application that led to the ‘912 patent induced the other JEDEC members to rely upon that standard being free of intellectual property encumbrances. The JEDEC members, including Google, were without information regarding the pending patent application that was to issue as the ‘912 patent when JEDEC issued the JESD82- 20A standard.
What about the progenitor ‘386 patent? That was already public information by that time. What “induced” JEDEC members (GOOG among them) to vote again for the standard, knowing by then that it conflicted with the ‘386 patent (if not the ‘912 patent)?
Focusing on ‘912 patent is a bit of a red herring – because NLST has used ‘912 in the complaint (as the latest of the ‘386 patent thread).
What prevented JEDEC members from abandoning this standard (if they knew it was conflicted by then) ?
JEDEC approval of a standard does not reduce the burden of securing IP that supports that standard.
Why did GOOG go ahead with rampant use of NLST IP after that ? Ratification of standard at JEDEC does not automatically give people the right to use NLST IP – to do that you STILL have to negotiate for licensing terms (even if it is a JEDEC “standard” and even if they are expected to be on RAND terms).
Netlist,
Thank you for going through the goog response. However it does appear that
Joe Soleiman (sp ?) of netlist who is an inventor of 912 attended JEDEC meeting
and appeared to keep quiet about the 912 patent application. Hard for him to
claim that he did not know ? It would be useful for goog to get 912 thrown out
based on netlist cheating and focus only on 386. Don’t you agree with that strategy ?
Also any chance that Netlist gets thrown out of JEDEC (like Rambus) and loses the
opportunity to claim that their hypercloud modules are “JEDEC compatible” ?
quote:
However it does appear that
Joe Soleiman (sp ?) of netlist who is an inventor of 912 attended JEDEC meeting
and appeared to keep quiet about the 912 patent application. Hard for him to
claim that he did not know ? It would be useful for goog to get 912 thrown out
based on netlist cheating and focus only on 386. Don’t you agree with that strategy ?
In my cursory reading of the two patents, i.e. the ‘386 and the ‘912, I couldn’t find any serious difference between them. I suspect the only reason NLST refers to the ‘912 patent is that it is the newest incarnation of the ‘386 patent thread.
Read through the two patents (links above) and see if there is any difference there.
I suspect GOOG was in a hurry to file an answer before the deadline. Its lawyers are also new (after a change of law firms). It is possible they just put in some boilerplate and threw in the ‘912-specific comments, fully knowing it doesn’t really save GOOG, since they have not addressed the ‘386 patent (which is part of the ‘912 patent thread – ‘912 being a continuation of ‘386).
The ‘386 patent was known to JEDEC, and was disclosed by NLST in December 2007 (from GOOG’s timeline it seems that after NLST did this, the JEDEC members STILL went ahead and finalized the standard).
So why agree on the standard despite finding out it infringed?
And why did GOOG initiate use of infringing IP without first securing licensing rights from NLST ?
There is something deliberate about this.
Plus we don’t have full information about the leakage at Texas Instruments (and possibly Bill Gervasi and others who had worked at NLST at the time of the invention and later worked at JEDEC or STEC).
In addition, MetaRAM was patenting things throughout this period and would have known about competitors’ IP.
INTC was doing FB-DIMM with 4-rank/quad-rank presentations at JEDEC – it is naive to think they didn’t do patent searches on this area ahead of time.
It is naive to think that just because NLST employees delayed “mentioning” a patent continuation application (the ‘912 was a continuation of the ‘386 patent) – which may be routine in the industry – this somehow impaired anyone’s ability to know that the ‘386 patent already existed.
quote:
Also any chance that Netlist gets thrown out of JEDEC (like Rambus) and loses the
opportunity to claim that their hypercloud modules are “JEDEC compatible”?
I am not sure if NLST is even in JEDEC anymore. Wouldn’t surprise me to know they are not part of the memory module committee.
There is a difference between RMBS and NLST – RMBS’s were radical designs which required major changes (cooperation by motherboard makers, slot architecture, and that type of thing). NLST’s is a plug-and-play solution (requiring no BIOS update) that works with existing systems. So NLST has considerably fewer hurdles than RMBS – and RMBS has done well both in court and in licensing.
Netlist,
I think Netlist offered 386 patent to JEDEC (court filing says there was a letter from Jack Bhakka ?) so
maybe 386 was not issue for JEDEC approval ? Only licensing had to be negotiated with Netlist. However
looks like 912 does not have such letter so there is a difference in disclosures ?
Texas Instruments and Gervase is interesting. Do you have any thoughts on how netlist can use
that for more $$ ?
quote:
I think Netlist offered 386 patent to JEDEC (court filing says there was a letter from Jack Bhakka ?) so
maybe 386 was not issue for JEDEC approval ? Only licensing had to be negotiated with Netlist. However
looks like 912 does not have such letter so there is a difference in disclosures ?
Texas Instruments and Gervase is interesting. Do you have any thoughts on how netlist can use
that for more $$ ?
Basically, JEDEC knows the standard falls afoul of NLST’s ‘386 patent.
Same for GOOG – which actually went ahead and used it, without paying a dime to NLST or bothering to discuss it with NLST. GOOG may even have been the mover behind the other players in bypassing NLST (if NLST’s claims are right – i.e. encouraging others to infringe NLST IP, encouraging companies it dealt with to manufacture the memory, and maybe even MetaRAM).
You realize GOOG complaining “we didn’t know of the ‘912 patent application” has the obvious response that NLST can simply revert to the ‘386 patent in its arguments.
This is what I was saying – GOOG nitpicking on “we didn’t know about the ‘912” suggests that they have no substantive argument against the ‘386 patent and are making that case as a short-term strategy. That is, biding time until settlement – or what?
quote:
Texas Instruments and Gervase is interesting. Do you have any thoughts on how netlist can use
that for more $$ ?
I am not sure about Bill Gervasi – I dropped that name just to highlight that there is considerable promiscuity in this niche area with people moving from one company to another company and JEDEC etc.
So the argument that people are not aware of patents is misleading (esp. after what happened with JEDEC/RMBS). Although initial patent applications may not be visible to others.
Gervasi the “inventor” of “4-rank” (he is one of three patent authors) later went to work for STEC and chaired the JEDEC committee as well.
NLST vs. Texas Instruments is under the radar – perhaps because one needs to go to the court to get the documents (or have them mailed). I couldn’t find it on PACER.
http://www.faqs.org/sec-filings/091103/NETLIST-INC_10-Q/
NETLIST INC – FORM 10-Q – November 3, 2009
…
Trade Secret Claim
On November 18, 2008, the Company filed a claim for trade secret misappropriation against Texas Instruments (TI) in Santa Clara County Superior Court, based in TI’s disclosure of confidential Company materials to the JEDEC standard-setting body. On February 20, 2009, TI filed its answer. The parties are currently engaged in settlement discussions. If those discussions are unsuccessful, the Company expects to vigorously pursue its claims against TI.
Court website:
http://www.sccaseinfo.org/civil.htm
Search – enter “netlist” for business name.
Gives the result:
Netlist, Inc. 1-08-CV-127991 Netlist, Inc. Vs Texas Instruments, Incorporated Intellectual Property – Unlimited
Which leads to:
http://www.sccaseinfo.org/pa6.asp?full_case_number=1-08-CV-127991
However it doesn’t seem to be on PACER, and seems to require going to court to get documents copied (or via mail).
Maybe someone will be interested enough to get a copy of the documents (will shed light on the goings on at JEDEC/Texas Instruments).
This seems to be an example of TXN (Texas Instruments) doing something similar to what Inphi is doing. That is, a buffer chip for “quad-rank/4-rank”.
So does this make TXN similar to Inphi and others – i.e. if “4-rank/quad-rank” is what makes it infringing ?
http://news.thomasnet.com/fullstory/818452
DDR3 Register is designed for memory modules.
DALLAS (April 15, 2008) – Texas Instruments (TI) (NYSE: TXN) today announced the industry’s first full production release of a phase locked loop (PLL) integrated DDR3 register for registered dual in-line memory modules (RDIMMs). This device enables system stability through constant clock and output delay over voltage and temperature variation. The single-chip quad rank support saves overall board space and reduces power consumption in servers, work stations and storage equipment. (See http://www.ti.com/sn74ssqe32882-pr.)
…
TXN’s SN74SSQE32882 datasheet:
http://pdf1.alldatasheet.com/datasheet-pdf/view/250017/TI/SN74SSQE32882.html
http://focus.ti.com/docs/prod/folders/print/sn74ssqe32882.html
SN74SSQE32882 Status: ACTIVE
JEDEC SSTE32882 Compliant 28-Bit to 56-Bit Registered Buffer with Address-Parity Test
With Netlist announcing volume production and shipment of its leading products and generating cash flow, Netlist should continue extensive and vigorous discovery on Google to prosecute what NLST claims is willful patent infringement. If Netlist wins, this would be a landmark case, and Google’s stellar image and pocketbook may be damaged. Such a win would encourage smaller companies’ investment in R&D and innovation as proprietary technology for volume production, generating solid ROI as Netlist is attempting to do. Apple, HP and Dell appear to have recognized the value of Netlist as an innovative company, in contrast to Google’s apparent assessment.
We should expect discovery dispute motions to reveal the extent of Google’s conduct. Google would undoubtedly seek protective orders to avoid having to turn over such documents. Since MetaRAM settled with Netlist, Google may be unable to deny meetings or collaborations, or to claim the non-existence of certain documents.
If there are settlement discussions, Netlist should consider that after a full disclosure from Google. It would be particularly interesting to know the extent of collaboration with MetaRAM, Intel, Inphi and others, if any.
If Netlist prevails after a jury trial, how much would the valuation expert articulate to the jury? If punitive or exemplary damages are awarded, will that have relevance to Apple’s litigation with HTC and implication to Google?
Both Hypercloud and NetVault product lines should generate substantial cash flow for several years to come as in below link. http://www.netlist.com/investors/investors.html
IRVINE, Calif., Feb. 17 /PRNewswire-FirstCall/ — Netlist, Inc. (Nasdaq: NLST), a designer and manufacturer of high-performance memory subsystems, today announces that a major OEM has commenced volume consumption of NetVault-NV™, a flash memory based non-volatile cache memory subsystem targeting RAID (redundant array of inexpensive disks) storage applications. NetVault-NV offers disaster recovery backup from system power failures and is optimized to ensure high system availability in RAID systems without using battery power for backup.
Simultaneously, within the same product family, Netlist also announces that it is also at mass production status with a major OEM with its NetVault-BB (DRAM memory) based product, Netlist’s third-generation RAID cache solution. NetVault-BB offers disaster recovery backup from system power failures and ensures high system availability.
NetVault-NV provides server and storage OEMs a solution for enhanced datacenter fault recovery, reduces system downtime and total cost of ownership. Unlike traditional fault tolerant cache subsystems, which rely solely on batteries to power the cache until the IT manager can recover business critical data and restore the system, NetVault-NV utilizes a combination of DRAM for high throughput performance and Flash for extended data retention. With NetVault-NV, data is retained for years following a disaster versus traditional subsystems, which often cannot preserve cache data for more than 24 to 72 hours.
“NetVault-NV delivers the reliability and performance demanded by end users while reducing the total cost of ownership for this high reliability disaster recovery solution by eliminating the need for battery backup power,” said C.K. Hong, President, and CEO of Netlist. “As the only available flash based merchant market solution, NetVault-NV is also now entering volume production with a tier one OEM. Joining our recently announced HyperCloud product line, the NetVault family further demonstrates Netlist’s leadership in solving critical problems for the datacenter through innovative value-added memory subsystems.”
This is LR-DIMM, move along, nothing to see here.
Will Google capitalize competitively by being the plaintiff and dragging this court case out as far as possible in order to freeze industry acceptance of HyperCloud technology, or will it admit an error in its assessment of Netlist’s intentions and settle, so that Netlist can go about its business of innovation? The latter would surely help other Google competitors become more efficient, cut costs and become more green. Maybe that’s Google’s strategy to stay ahead. Dragging this case out would be a cheap way to stay ahead of the competition. It could be a sign that Google is running out of original ideas and has no better business morals than any other big business. Whatever it takes to stay ahead of the game.
Hi Auditor,
It would be interesting if this case went to a jury, but the vast majority of civil cases end with settlements.
At the heart of this dispute isn’t the quality or value of the products that Netlist has to offer, but rather whether Google has infringed upon their patented technology. I don’t know the answer to that, but then again, I’ve never claimed to have any particular expertise in memory modules. I also don’t know what impact a legal victory over Google would have to how potential customers might treat Netlist in the market, or the impact on Apple’s litigation with HTC.
Hi Billy,
I wrote this post about Google’s unusual acquisition of a large amount of memory module technology from Metaram. The purchase is still somewhat of a mystery.
I did take a look on the 25th last month in a Second Amended Joint Case Management Conference Statement (pdf) that I viewed, and it referred to “four-rank FBDIMMs” as being at the heart of the dispute. The conference was scheduled for March 4th, and I haven’t gone back yet to see what might have happened during that conference.
I am interested in why you state that this dispute involves LR-DIMM, and what the implications would be if it did. I’m also not sure what interest you have in telling visitors to this blog “Nothing to see here. Move along.” If you’d like to share, I’d be happy to listen. Thanks.
Hi spencity.
Possibly, but if I were in Google’s shoes, I would hate having to rely upon attempting to delay pending litigation as a tactic for hindering competitors.
I do see a lot of the patents and whitepapers that Google publishes, and try to keep up with the technology that they hint at and release in other areas. It does seem like they still have more than a couple original ideas coming out of their Mountain View headquarters.
Minor update on court cases.
NLST vs. GOOG:
– NLST filed an answer to GOOG’s counterclaims in NLST vs. GOOG – nothing special there; NLST says GOOG can’t claim NLST did this or that, when GOOG CONTINUED to violate NLST IP even after knowing about it, after JEDEC was told the new JEDEC standard violated NLST IP.
NLST vs. TXN:
– NLST filed some papers in NLST vs. TXN (Texas Instruments) – can’t get to them via PACER. It would be interesting to see the contextual information in that case.
http://www.sccaseinfo.org/pa6.asp?full_case_number=1-08-CV-127991
NLST vs. TXN
quote:
Will Google capitalize competitively by being the plaintiff and dragging this court case out as far as possible in order to freeze industry acceptance of HyperCloud technology, or will it admit an error in its assessment of Netlist’s intentions and settle, so that Netlist can go about its business of innovation?
quote:
It would be interesting if this case went to a jury, but the vast majority of civil cases end with settlements.
The case is unlikely to go before a jury – and may end much earlier, because discovery will start to tackle issues with GOOG e-mail and what employees knew and when they knew it. If GOOG is going to settle, there is no point waiting.
Meanwhile, “dragging it out” does no harm to NLST OR to the competition using NLST memory (I also doubt that is the way GOOG will go about it – it is a negative way of doing things which does not fit GOOG culture, and is a defensive measure rather than a proactive one).
The reason is that NLST is not waiting for JEDEC approval, or some “industry approval for a certain format”, to “gain acceptance” – NLST HyperCloud memory modules are “plug and play”, require no BIOS update, and require no changes to the motherboard (unlike CSCO’s UCS strategy, which uses motherboard changes to accomplish something similar).
So GOOG delaying its approval just deprives NLST of business at GOOG. But it also delays GOOG’s ability to participate in NLST HyperCloud memory qualification. With MetaRAM (the biggest player, with industry support from Intel and others) gone, there are few options available in the market (apart from building their own memory, as GOOG seems to have been doing? or were they using MetaRAM?). GOOG use of NLST HyperCloud is a natural fit, and GOOG would deprive ITSELF of the opportunity to use it (while prolonging its infringement of NLST IP), while all the competition could start using it right now.
In addition, in the event of a loss, GOOG loses face, loses the case, and faces treble damages, humiliation among the “do no evil” crowd, evisceration of the old infringing memory from GOOG servers, and perhaps even a court audit of GOOG servers to see which are infringing and which are not (to calculate damages).
In fact GOOG gains FAR more by doing a nice deal with NLST. It would probably be excused for using the infringing memory and could continue to use it – so no disruption to GOOG servers either.
To be noted is GOOG’s under the radar behavior in this matter as well – the matter of who made the memory for GOOG – NLST accuses GOOG of “rallying the troops” in infringing NLST IP – and GOOG MAY have played a part at JEDEC rallying memory module makers to ignore NLST IP violations.
It is intriguing how MetaRAM claimed they had “destroyed” the infringing memory products and that they were worth only a small amount. This was very odd, given that they were the darlings of the industry – with INTC and others supporting them, they were to supply major memory module makers with the buffer chips.
So the story is not as crystal clear regarding GOOG’s behavior – possibly due to the conflict created by a hardware division that thought it could do things inhouse without caring about NLST IP.
So in summary – in stark contrast to what USUALLY happens in such cases i.e. big company stringing small company along until they fail – THAT type of leverage is NOT available to GOOG in this case.
Time is not on GOOG’s side, both in terms of escalation of severity and in terms of loss of access to NLST HyperCloud (which is a match for highly memory-loaded systems). Using HyperCloud allows this niche (servers running virtualization and needing lots of memory, search functions as at GOOG, and similar applications) to reduce the hardware needed – i.e. cases where the CPU is powerful enough but there is not enough memory. New server farms cannot capitalize on new multi-core CPUs if they are limited by the memory-loading capability of current systems – you can’t load up the memory without reducing the achievable speed. HyperCloud allows them to bypass that problem.
HyperCloud helps in:
– reducing power requirements (memory module turns off banks that are not in use on the memory module)
– greater memory-loading at same bandwidth (as you load memory, electrical issues limit the max speed achievable)
– costs less – NLST uses “lower dollar per bit” memory chips to emulate “higher dollar per bit” memory chips. That is, they can make a 16GByte module using 2Gbit memory chips; without the NLST IP, you would have to use 4Gbit memory chips. Since memory pricing is not linear, this is a major cost saving – 4Gbit memory chips cost MORE than 2x what a 2Gbit memory chip costs.
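The non-linear pricing point lends itself to a quick back-of-the-envelope calculation. The chip prices below are hypothetical placeholders (real DRAM spot prices vary); the only assumption carried over from this thread is that a 4Gbit chip costs more than twice what a 2Gbit chip costs:

```python
# Illustrative cost comparison for a 16 GByte module built two ways.
# Prices are hypothetical placeholders; the post's assumption is only
# that a 4 Gbit chip costs MORE than 2x a 2 Gbit chip.

GBIT = 2**30
MODULE_BITS = 16 * 8 * 2**30  # 16 GByte module expressed in bits

def module_cost(chip_bits, chip_price):
    """Cost of filling the module with chips of a given density."""
    chips_needed = MODULE_BITS // chip_bits
    return chips_needed * chip_price

cost_2gbit = module_cost(2 * GBIT, 10.0)  # hypothetical $10 per 2 Gbit chip
cost_4gbit = module_cost(4 * GBIT, 25.0)  # hypothetical $25 per 4 Gbit chip (>2x)

print(cost_2gbit)  # 64 chips * $10 = 640.0
print(cost_4gbit)  # 32 chips * $25 = 800.0
```

With these placeholder prices, the lower-density build needs twice as many chips yet still comes out cheaper – which is the cost argument being made for the emulation approach.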
In the TXN case, I noticed that an ex parte motion from Netlist was granted. An ex parte motion needs special urgent circumstances. It would be interesting to know what TXN is trying to compel NLST to produce.
Today’s volume peaked at over 8 million shares, which is curious. There have been no substantiated rumors. It would help to know which major OEM is doing volume consumption or has signed for volume production. Cisco or Google seems far-fetched at this point. Any recent transactions with Intel, AMD, Apple, Hynix, Samsung?
quote:
Today’s volume peaked over 8 mil shares which is curious. There has been no substantiated rumors. It would help to know which major OEM is doing volume consumption or signed for volume production. Cisco or Google seems far fetched at this point. Any recent transactions with Intel, AMD, Apple, Hynix, Samsung?
There was a rumor today about a CSCO/NLST linkup. The relationship is more combative, though, since the NLST solution-on-memory-module trumps CSCO’s UCS strategy (where they have to modify the motherboard to do the same thing – thus also rendering it non-standard).
Here’s a much-repeated article which compares the two:
http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
Netlist goes virtual and dense with server memory
So much for that Cisco UCS memory advantage
By Timothy Prickett Morgan
Posted in Servers, 11th November 2009 18:01 GMT
It also doesn’t help that NLST has an extremely small float – by a calculation on the Yahoo board, something like 4.79M to 7.13M shares are unaccounted for once you exclude management and fund holdings. Total outstanding shares are nearly 20M.
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=13357&mid=13357&tof=1&frt=2
analysis of NLST float (from recent filings)
Of those 4.79-7.13M shares, subtract the smaller shareholders and the ones who post on message boards and you have substantially fewer shares available.
Just more reiteration of background info.
http://www.wsw.com/webcast/needham35/nlst/
Needham Growth Stock Conference
January 14 at 2:30 pm ET (Jan 14, 2010)
Corporate Presentation
Needham Conference January 2010
http://www.b2i.cc/Document/1941/Netlist_Needham_Conf_Jan_2010.pdf
In this Needham presentation, CEO Hong outlines the factors at play:
Servers traditionally have 24 sockets, most with 18 sockets.
If you populate half of those sockets, i.e. 9 sockets, your achievable memory speed goes down from 1333 MHz to 1067 MHz. The more memory you add, the LOWER the achievable speed.
So for modern CPUs with multiple cores, it is the MEMORY which is holding it back.
NLST HyperCloud allows you to pack all the sockets and STILL run at 1333 MHz.
From the presentation above – you get “100% more capacity at 66% higher memory speed” (probably with all sockets full; competitors run at 800 MHz, so 1333/800 = 1.66, i.e. 66% higher memory speed).
HyperCloud also uses register and isolation devices to shut down power to certain DRAM devices when not in use, thus reducing power. For heavily memory-loaded servers, memory power consumption is significant.
HyperCloud “tricks” the system into treating two 1Gbit memory chips as one 2Gbit memory chip – thus using “lower dollar per bit” memory chips instead of higher ones. PLUS this advantage always stays with NLST – as new higher-density memory chips arrive, NLST can use the second-tier memory chips (which will always be cheaper) to achieve the same performance.
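As an illustration only (a toy model of the general idea, not NLST’s actual circuit design), the “two chips look like one bigger chip” trick can be sketched as a buffer that steers each access to one of two physical chips using the top address bit, so the host sees a single device with twice the address space:

```python
# Toy model of density emulation: the host addresses ONE device with
# 2 * CHIP_WORDS words, while the "buffer" (the read/write functions)
# routes each access to one of TWO physical chips via the top address
# bit. Sizes here are tiny placeholders for readability.

CHIP_WORDS = 1024            # stand-in for one chip's address space

chip_a = [0] * CHIP_WORDS    # first physical chip
chip_b = [0] * CHIP_WORDS    # second physical chip

def write(addr, value):
    chip = chip_b if addr >= CHIP_WORDS else chip_a  # top-bit chip select
    chip[addr % CHIP_WORDS] = value

def read(addr):
    chip = chip_b if addr >= CHIP_WORDS else chip_a
    return chip[addr % CHIP_WORDS]

# The host writes across what it believes is one double-density chip:
write(5, 111)                # lands in chip_a
write(CHIP_WORDS + 5, 222)   # lands in chip_b
print(read(5), read(CHIP_WORDS + 5))  # 111 222
```

The host-facing interface never changes, which is the point: the system sees a standard, larger device, while the cheaper lower-density parts sit behind the buffer.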
HyperCloud is plug and play, requires no BIOS changes, and requires no changes to the motherboard (which WOULD have meant dealing with JEDEC, standardization, getting partners on board and other such complications). HyperCloud requires no such thing – and can be used interoperably with regular memory.
CEO Hong says they have done studies with OEMs – doubling memory capacity increases efficiency by 50%, meaning fewer servers are required to do the same job in data centers/cloud computing.
For heavily memory-loaded servers, memory cost is significant, so using cheaper memory is an advantage.
Billy,
“3 Advantages of Designing with Micron LRDIMMs”. That statement was made on a Micron Technology promotional web page soliciting OEMs to use their new DOA LRDIMM memory module. It seems to imply that LRDIMMs are not plug and play with current JEDEC standards, but would compete in a market with HyperCloud, which is plug and play. I say DOA because Micron is asking OEMs to design their systems around the memory modules and set new standards that Micron would need. HyperCloud is plug and play, so OEMs can spend their development budget on increasing bus speeds rather than accommodating new standards. I wonder what HyperCloud’s real maximum speed is, since it can handle today’s top bus speeds according to Netlist. Also, Micron is using buffer chips from Inphi that Netlist has contested as infringing on their IP, and the matter is under litigation. The IP enabling speed and efficiency, along with 4-rank memory addressing, is all claimed by Netlist. The courts may have the final say, unless there is a settlement beforehand. Micron seemed to be well into development in late 2009, until the Netlist vs. Inphi case was initiated. Where is Micron in the development of its LRDIMM modules? Is that the “Nothing” to which you are alluding?
Current servers can enjoy significant upgrades in efficiency and power simply by switching to HyperCloud. Can LRDIMMs claim that?
HyperCloud usually benefits you if you have heavily memory-loaded systems.
On home-built systems you can also overclock the computer and use higher-speed memory, I guess.
However, the area HyperCloud is targeting is populated by speeds starting at 1333 MHz but decaying rapidly as you fill up memory slots.
As outlined in the March 9 post above, for 18-memory-slot motherboards at 1333 MHz, as you fill 9 slots you are forced to run at 1067 MHz. When you fill all 18 slots, you have to run at 800 MHz (800 MHz being the baseline for the “66% higher memory speed” figure which NLST quotes).
So for high memory-loaded servers, the advantage of having a fast CPU is negated as you add more memory.
NLST allows you to run with all 18 slots at the full 1333 MHz.
The power efficiency and cost advantages are additional.
Routine settlement conference in GOOG vs. NLST set for April 30, 2010 at 9:30am in front of Magistrate Judge (Elizabeth D. Laporte). Laporte was the judge both GOOG/NLST agreed on after Judge Trumbull was not available.
quote:
It is not unusual for conferences to last two to three hours or more. No participant in the settlement conference will be permitted to leave the settlement conference before it is concluded without the permission of the settlement conference judge.
Parties are encouraged to participate and frankly discuss their case. Statements they make during the conference will not be admissible at trial to prove or disprove liability in the event the case does not settle. The parties should be prepared to discuss such items as their settlement objectives, any impediments to settlement that they perceive, whether they have enough information to discuss settlement and, if not, what additional information is needed and the possibility of a creative resolution of the dispute.
I keep on feeling like some kind of surprise is going to jump out of this litigation. Not sure exactly what, but that’s the feeling I have.
Did anyone here get a copy of the transcript from the Roth OC Growth conference yesterday? If so can you post it somewhere?
I tried listening to the audio but the quality was terrible.
Hi Fallguy,
I don’t know if anyone publishes transcripts from that particular conference, but for anyone interested in trying to listen, I’m guessing it might be possible to sign up and do so from the Roth web site:
http://www.roth.com/main/page.aspx?PageID=7228
It would be amazing if Netlist did not search for other partners after they were rebuffed by Google. They are a very ambitious company, given the development of their recent products, and could be pursuing multiple paths to growth. Their HyperCloud module has widened a data-processing bottleneck. Its maximum speed may be greater than 1333 MHz. It’s possible that the same Netlist IP used to speed up a memory data bus could be used elsewhere in a digital circuit, thereby increasing data processing/routing speed. Mr. Chung Ki Hong was once president and CEO of Infinilink, which manufactured routers and other data equipment. I wonder if that was the basis of the rumor started earlier this week that Netlist collaborated with Cisco in the development of their new super router? Just food for thought. Facts are hard to come by until official news releases.
It is possible that CSCO is using NLST IP for loading lots of memory into the routers they produce; however, that would require conventional motherboards being used (I don’t know if they are in that product), and a longer-standing relationship between CSCO and NLST than we are aware of.
Secondly, CSCO’s UCS strategy is neutered by NLST HyperCloud – CSCO UCS servers have motherboards with the same type of circuitry, except it is in the motherboard, thereby allowing greater loading of memory. NLST has all that on the memory module itself, making it plug and play and not dependent on any widespread adoption of a newer standard for motherboards, etc.
So they are competitors in that aspect at least. I don’t know how that affects the rumored collaboration story.
My own feeling is that the rumor is misleading – i.e. someone was expecting something from NLST and it got ascribed to CSCO’s radical new product announcement.
Or it could be that CSCO is using NLST’s NetVault product – since routers include battery-backed RAM to store configuration information, it could be that they have chosen NLST’s NetVault product to do that instead of the standard battery-backed RAM. NLST’s NetVault backs up the memory information in the volatile RAM to flash memory (located on the same memory module) – the power required to do this quickly after power failure is supplied by a “supercapacitor” instead of batteries (thus reducing periodic replacement of batteries by on-site personnel).
NLST NetVault could be very useful in consumer-oriented products – while battery-backed memory worked for data centers, which have on-site personnel, consumer applications may open up once manufacturers know they CAN design products that don’t require battery maintenance to retain the integrity of the device (i.e. the manufacturer’s startup settings).
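The backup-on-power-failure flow described above could be sketched roughly like this – the event names, list-as-memory model, and one-shot supercap check are all my illustrative assumptions, not NLST specs:

```python
class Supercap:
    """Stand-in for the supercapacitor: holds just enough charge to
    fund one DRAM-to-flash copy after power fails."""
    def __init__(self):
        self.charged = True

def on_power_event(event, dram, flash, supercap):
    if event == "power_fail" and supercap.charged:
        flash[:] = dram[:]        # persist volatile contents to on-module flash
        supercap.charged = False  # one-shot energy budget
    elif event == "power_restore":
        dram[:] = flash[:]        # restore state; no battery ever involved

dram, flash = [1, 2, 3], [0, 0, 0]
cap = Supercap()
on_power_event("power_fail", dram, flash, cap)
assert flash == [1, 2, 3]         # contents survived the outage
```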
Bill
I got the transcript. Completely useless and unreadable. What I gathered was
pretty much no new news – no OEM orders or any other news on litigation (at
least in the transcript) !
quote:
I got the transcript. Completely useless and unreadable. What I gathered was
pretty much no new news – no OEM orders or any other news on litigation (at
least in the transcript) !
That must have been one piece of creative writing (given the audio was so hard to decipher).
Here is a bit of the transcript:
And also a proceeding of major – operation to our department on the resulting on – the post office and important to maintain that – and we feel also – and the worldwide combined the – by holding certain of that and many to both number of – dollar material future to the proliferation of our computing and revolving traffic. And we are going to be – the workforce advice business makes anything. Do you think to create a huge computing and message do well?
On Planet Earth,
What worries me is no news of successful evaluations or status at OEMs.
Netlist – is this delay with no news reasonable? Or is there some problem with the HyperCloud solution?
netlist
Does Netlist have known relationships with any top tier OEMs?
In conference audio, CEO Hong has mentioned “customers historically been”:
IBM
DELL
HP
Their SEC filings have mention of these OEM relationships.
In addition NLST used to supply memory to AAPL – so was considered good quality producer of memory.
The Register has come out with a newer piece on NLST – this time calling the raising of $15M as indicative of Wall Street confidence in the company’s technology:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=14629&mid=14629&tof=1&frt=2
Netlist’s HyperCloud memory gets Wall Street’s blessing
Raises $14.1m in stock sale
By Timothy Prickett Morgan
Posted in Financial News, 23rd March 2010 06:02 GMT
Sorry that was the yahoo board thread discussing that article.
Here’s the direct link:
http://www.theregister.co.uk/2010/03/23/netlist_public_float/
Netlist’s HyperCloud memory gets Wall Street’s blessing
Raises $14.1m in stock sale
By Timothy Prickett Morgan
Posted in Financial News, 23rd March 2010 06:02 GMT
You will recall the earlier piece by theregister which is probably the best explanation so far for NLST HyperCloud:
http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
Netlist goes virtual and dense with server memory
So much for that Cisco UCS memory advantage
By Timothy Prickett Morgan
Posted in Servers, 11th November 2009 18:01 GMT
quote:
In addition NLST used to supply memory to AAPL – so was considered good quality producer of memory.
This was before NLST did the restructuring (i.e. de-emphasizing conventional memory, which was a loss-making business for most memory producers at that time) and went on that prolonged effort to create what we now know as HyperCloud and NetVault – and the move to Suzhou, China (which seems to be a memory-producing hub).
NLST states that nearly 50% or so of their customers are in China – whether that is China-China or the Suzhou factories of HP or other OEMs is unclear (since many U.S. companies operate in China). Reason for shift to Suzhou given was also that “we needed to be closer to our customers” or something like that.
quote:
Reason for shift to Suzhou given was also that “we needed to be closer to our customers” or something like that.
And reduction in production cost.
Probably easier to source memory chips also – since most of the memory producers are there as well – Hynix and others.
NLST HyperCloud will be using Hynix memory chips (in the first batch at least).
Updates on GOOG vs. NLST.
From the following thread on NLST yahoo board:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=14733&mid=14811&tof=1&frt=2#14811
Re: update on the various court cases 2 .. GOOG infringing memory
We now know what types of infringing memory GOOG was using and who their contract manufacturers were (from court filings in GOOG vs. NLST – dockets 113 and 114).
NLST has outlined the info collected from the depositions they have been taking from GOOG employees.
From docket 113 (113-1):
12. Through the 30(b)(6) testimony of Google, Netlist was able to learn several
key pieces of information that inform and serve as the basis for the proposed infringement
contention amendments. I took both depositions. Because Google designated the transcripts
as “Confidential – Attorney’s Eyes Only,” they are not being submitted herewith to avoid the
necessity to file them under seal. The corporate testimony that Netlist was finally able to
obtain from Google during February and March 2010 included the following:
• the identity of the different 4-Rank FBDIMMs supplied by Google and
their part numbers;
• the specific serial signal protocol used by Google’s “logic element”
component of the accused 4-Rank FBDIMMs (called an “Advanced Memory Buffer” or
“AMB”) and the manner in which the logic element is informed about the rank to which
command and address signals are to be directed;
• the maximum number of memory ranks to which control and command
signals received by Google’s 4-Rank FBDIMMs may correspond;
• Google’s use of infringing eight gigabyte (“8GB”) 4-Rank FBDIMMs
and 2GB 4-Rank FBDIMMs in addition to 4GB 4-Rank FBDIMMs;
• the manner in which Google’s AMBs generate output command signals
such as row address strobe (“RAS”) signals, column address strobe (“CAS”) signals, and
write enable (“WE”) signals to a selected rank of memory to execute DRAM commands such
as read, write, refresh, precharge, etc.;
• the specific AMB part numbers and suppliers used by Google;
• the identity of the contract manufacturers who have assembled 4-Rank
FBDIMMs for Google;
• Google’s receipt of a letter from Netlist to JEDEC in January 2008
which identified the ‘386 Patent and its relationship to the JEDEC AMB Quad Rank Support
Standard that Google admits to practicing and Google’s actions in response to the Netlist
letter;
• Google’s admission that the AMB is a form of an application specific
integrated circuit (ASIC);
• Google’s use of edge connectors on its 4-Rank FBDIMMs to connect
the modules to memory slots in its servers.
Regarding the pg. 4 (see below) mention of “Ilium” and “Icarus” servers, I don’t know if these are classes of server, or the specific servers that GOOG was asked to turn over to NLST for examination (which led to the discovery of “Mode C” usage – the smoking gun for “4-rank/quad-rank” use).
Searching Google for those server names turned up this:
http://ruscoe.net/google/google-subdomains-internal/
Google’s Internal Subdomains
…
icarus.corp.google.com (7 May 2007)
ilium.corp.google.com (7 May 2007)
…
Regarding pg. 7 (see below) we see some of the contract manufacturers:
Unigen
Southland Microsystems
Kingston
Qimonda
Entorian
The timeline, while known before, is spelled out again below – it shows that GOOG froze like a deer in the headlights when NLST approached it and, rather than deal with it, went to court instead to create an orderly environment for settling this issue (which underlies the server technology GOOG is using).
From docket 114 (114-4) for case GOOG vs. NLST:
(pg. 4)
… Based on the information presently known to Netlist, the Accused Instrumentalities include memory modules bearing the following names and/or model numbers:
1) 4-Rank, 2GB FBDIMMs: iGooFMM2, 07000752, 07000753, 07000754, 07000755, 07001780, 07001853, 07001834, 07002739, K17000752-753, K107000752-754 K107000752-755, S107000752-780 G107000752-853, G107000752-854, S107000752-739 S107000752-854:
2) 4-Rank, 4GB FBDIMMs: iGooFMM44, 07000763, 07000764, 07000765, 07000766, 07001779, 07001852, 07002028, 07002028, 07002255, K107000763-764, K107000763-765, K107000763-766, S107000763-779, Q107000763-852, G107000763-028, Ql07000763-255, S107000763-028, iGooFMM4LP, 07005903, S107000763-xxxLP, and
3) 4-Rank, 8GB FBDIMMs: GooFMM8, GooFMM8Q, 07002964, 07002970, QZ07002964-970.
Google infringes the ‘386 Patent as follows:
1. By making, using and/or importing the Accused Instrumentalities, including by operating computer servers known as “Ilium” and “Icarus” in which the Accused Instrumentalities have been installed.
(pg. 5)
4. By supplying components of the Accused Instrumentalities–including DRAM chips, quad-rank supported advanced memory buffers, and printed circuit boards–to third party contract manufacturers who make 4-Rank FBDIMMs for Google’s use, at Google’s request, and as instructed by Google, Google is actively inducing infringement ..
(pg. 7)
… The contract manufacturers include Unigen, Southland Microsystems, Kingston, Qimonda and Entorian. Google has instructed the contract manufacturers in the assembly of the Accused Instrumentalities with knowledge of the ‘386 Patent and intent of causing the contract manufacturers to directly infringe the ‘386 Patent.
(pg. 8)
(h) Basis for Assertion of Willful infringement (PLR 3-1(h))
The basis for Netlist’s assertion of willful infringement includes the following:
During 2007. Google had 4-Rank FBDlMMs assembled for it by its contract manufacturers and used the modules in its data center computer servers.
Google was aware that the FBDIMMs were quad-rank enabled and understood that the modules utilized a quad-rank enabling method set forth in an Intel proposal called “AMB Quad Rank Support.”
During late 2007, Google attended one or more JEDEC meetings at which Netlist announced that it may have intellectual property that might cover the Intel AMB Quad Rank Support proposal. Nevertheless, Google made no inquiry about the Netlist IP and abstained from voting on the Intel proposal to conceal its use of quad-rank enabled AMBs from other JEDEC members.
In January 2008, Google received a letter from Netlist to JEDEC which identified the ‘386 Patent by number and which stated that the ‘386 Patent might be required to implement “Mode C” of the AMB Quad Rank Support standard. Google has admitted to using Mode C in its 4-Rank FBDIMMs.
In May 2008, Netlist wrote to Eric Schmidt, Chairman of the Board and Chief Executive Officer of Google, and informed Mr. Schmidt that Netlist had reason to believe that Google was using the technology claimed in the ‘386 Patent in its computer servers.
Along with the May 2008 letter, Netlist provided Mr. Schmidt with a copy of the ‘386 Patent. Google never responded to the May 2008 letter.
On June 4, 2008, Netlist’s counsel wrote to Mr. Schmidt again, stating that Google had not responded to the May 2008 letter and again identifying the ‘386 Patent to Mr. Schmidt. A copy of the May 2008 letter and its attachments accompanied the June 4, 2008 letter.
Google did not respond to the June 4, 2008 letter. Thus on June 19, 2008, counsel for Netlist again wrote Mr. Schmidt and again attached the May 2008 letter and a copy of the ‘386 Patent.
In July 2008, Netlist met with Google and made a presentation to Google’s counsel describing the ‘386 Patent and its relationship to the JEDEC AMB Quad Rank Support Standard. On August 28, 2008, Google filed this lawsuit.
Despite being put on repeated notice concerning the ‘386 Patent and the infringement of its 4-Rank FBDIMMs, Google continued to have infringing modules made for it and continues to use them even now. There is no evidence that Google has obtained an opinion of counsel of non-infringement or invalidity or that it has taken any step to avoid infringing the ‘386 Patent.
Netlist:
Great job as usual.
Given the delayed response by GOOG and the type of response they chose, I surmise that GOOG was using so many suppliers, and using NLST IP so extensively, that they not only needed time, but that a stoppage of the use of the NLST IP would have been detrimental to GOOG’s everyday operations.
That said, since GOOG would be one of, if not the, biggest customers of this NLST IP, a cash settlement plus additional considerations could be realistic.
Do you see any chance of a settlement announcement prior to the hearing on the 30th?
quote:
Do you see any chance of a settlement announcement prior to the hearing on the 30th?
I suppose that is possible, but no way to know.
If I were GOOG I would settle early and get moving with NLST HyperCloud.
GOOG lawyers may have other ideas though.
Thank you for the updates, netlist.
It’s hard to tell where this might eventually end up going, even with the facts that have come out so far.
Not directly related to the NLST/GOOG legal thread here, but continuing earlier post above about GOOG data center construction:
https://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-237024
here is a video presentation of GOOG data center:
http://www.greentechies.com/blog/2009/04/08/video-googles-energy-efficient-data-center/
Video: Google’s Energy Efficient Data Center
quote:
CNET’s Stephen Shankland collected a bunch of Google produced videos on YouTube that discuss Google’s energy efficient container data centers…
YouTube tour reveals Google data center designs
netlist,
The bullet points that you posted seem to strongly support Netlist’s argument and give Google little hope of winning a judgement. Are you aware of any testimony or evidence presented by Google that would give them a credible chance of obtaining a judgement in their favor?
netlist
The bullet points I referred to in the previous message are from docket 114 (114-1).
Hmm .. the previous post is confusing, reposting it below with some separators
quote:
The bullet points that you posted seem to strongly support Netlist’s argument and give Google little hope of winning a judgement. Are you aware of any testimony or evidence presented by Google that would give them a credible chance of obtaining a judgement in their favor?
Docket 113 is where NLST amends its charges against GOOG (thanks to facts unearthed in discovery – part numbers, the contract manufacturers GOOG was using, etc.).
From docket 113:
—–
(pg. 8 )
Specifically, a significant part of the non-public material discovered by Netlist
was obtained from the Rule 30(b)(6) deposition of Google’s staff design engineer, Rob
Sprinkle, whom Google promised to produce at the beginning of January yet ultimately
refused to produce until February 18, 2010. Hansen Dec. at ¶ 11. As described above,
Google also was “unable” to provide key information in response to Netlist’s written
discovery concerning the structure and operation of the accused 4-Rank FBDIMMs, and
refused to provide any specifics in its non-infringement contentions.
—–
Circumstantially, GOOG’s behaviour has been one of evasion. For example, they did not answer NLST’s challenge with even a counterclaim, but instead went rapidly to court (GOOG vs. NLST). It is possible that GOOG thought the case was indefensible (no answer could be given) and so took it to the court forum immediately.
This excerpt gives some sense of recent happenings, and you can get a flavor for the types of defence GOOG is mounting from this:
From docket 113-1:
—–
(pg. 2 )
After assuming responsibility for this case, Netlist’s current counsel began
reviewing hundreds of thousands of pages of information produced by Google and serving
written discovery directed to the structure and operation of Google’s accused 4-Rank
FBDIMMs. On September 10, 2009, Netlist served its second set of interrogatories and first set of requests for admission on Google. The requests for admission were directed to several pieces of infringement information, including the nature of the specific electrical signals received by the logic element (called an “advanced memory buffer” or “AMB”) of Google’s accused 4-Rank FBDIMMs.
Attached as Exhibit B hereto is a true and correct copy of “Plaintiff Google Inc.’s Responses to Netlist’s Request for Admissions Set No. One [Nos. 1-26],” dated October 27, 2009 (“Google Response to Netlist’s RFAs”). As set forth therein, Google stated that it could not respond to Requests Nos. 6, 11, 13, 14, 16, 17, 19, 21, 23, 24, 25, and 26 concerning its own 4-Rank FBDIMMs because it “lack[ed] sufficient information.” Id. at 8, 11-15 (Request Nos. 6, 11, 13, 14-17, 19, 21, 23-26)
6. Attached as Exhibit C hereto is a true and correct copy of “Plaintiff Google
Inc.’s Responses to Netlist’s Interrogatories, Set No. Two [Nos. 6-9],” dated October 27, 2009. As stated at page 5 therein, Google identified Robert Sprinkle as an employee who is knowledgeable about the structure and operation of the accused 4-Rank FBDIMMs and gave the following answer as to why it contends that its 4-Rank FBDIMMs are allegedly non-infringing:
Google does not infringe any claim of the ‘386 patent because one or more
elements required to be present by each claim is missing from Google’s accused
products, both literally and under the doctrine of equivalents. For example, the
accused products do not include a structure that meets the “logic element”
limitation because they nowhere include the functionality that is claimed in that
limitation.
7. On November 8, 2009, Netlist accepted Google’s offer to produce Mr.
Sprinkle for deposition on January 8, 2010. Google also insisted that Netlist simultaneously
depose Mr. Sprinkle in his individual capacity and under Rule 30(b)(6) concerning the
“structure, function and operation of the accused Google instrumentalities” on the agreed-upon date. Attached as Exhibit D is a true and correct copy of an e-mail from me to Shelly K. Mack of Fish & Richardson as well as Ms. Mack’s November 15, 2009 response thereto concerning the foregoing.
8. On December 4, 2009, Netlist filed a patent infringement lawsuit (Case No.
09-05718-SBA) against Google alleging infringement of U.S. Patent No. 7,619,912 (the
“’912 Patent”), which had just been issued by the United States Patent & Trademark Office on November 17, 2009. The ‘912 Patent is a continuation of the ‘386 Patent that is at issue in this case. In the present action, Netlist filed an Administrative Motion to Consider Whether Cases Should Be Related and Consolidated under Civil Local Rule 3-12 on December 17, 2009 (Document 83). Along with the motion, Netlist filed a joint stipulation signed by both parties requesting consolidation of this case and Case No. 09-05718-SBA. After the Court related but did not consolidate the cases, the parties filed a Joint Motion to Consolidate Cases, on January 6, 2010. Document 85. The Court denied the consolidation motion on February 3, 2010. Document 95.
9. On December 29, 2009, Netlist contacted Google and informed Google that in
the Court’s order relating this case and Case No. 09-05718-SBA the Court had not ordered
consolidation of the cases. Netlist also requested that Google identify the topics to which Google’s 30(b)(6) witnesses would testify, and in particular, those to which Mr. Sprinkle would testify on January 8, 2010. In response, Google informed Netlist that it would not produce any witnesses for deposition until the Court ruled on the parties’ request for consolidation. Attached as Exhibit E hereto is a true and correct copy of an e-mail from Google’s counsel, Shelly Mack, to me dated January 4, 2010 stating the foregoing.
10. In view of Google’s actions, Netlist re-served its Rule 30(b)(6) deposition
notice and Mr. Sprinkle’s deposition notice on January 6, 2010. The notices set the date for Mr. Sprinkle’s deposition on February 4, 2010 and for the corporation under Rule 30(b)(6) on February 5, 2010. Attached as Exhibit F hereto is a true and correct copy of a letter from Lauren Gibbs of Pruetz Law Group to Shelly Mack of Fish & Richardson, dated January 6, 2010, along with the deposition notices for Robert Sprinkle and for the corporation under Rule 30(b)(6) which accompanied Ms. Gibbs’ letter.
11. In late January, the King & Spalding firm replaced Fish & Richardson as
Google’s counsel of record in this lawsuit. Mr. Sprinkle was not produced for deposition on
the previously noticed date of February 4, 2010. Instead, he was produced on February 18, 2010. Mr. Sprinkle was designated to testify to topics 1(a-m), 2, 6, 8, 9, 10, 11, 13, 16, 17, and 18 of Netlist’s Rule 30(b)(6) Notice, a copy of which is attached as Exhibit G hereto. In addition, Google produced Andrew Dorsey to testify to topics 3, 4, and 12 and to a portion of topic 5 on March 11, 2010. Attached hereto as Exhibit H is an e-mail from Scott Weingaertner of King & Spalding, dated February 14, 2010 specifying Google’s designations of Rule 30(b)(6) witnesses. Although Norm Haus is identified on Exhibit H, Google ultimately produced Andrew Dorsey in his place on March 18, 2010.
12. Through the 30(b)(6) testimony of Google, Netlist was able to learn several
key pieces of information that inform and serve as the basis for the proposed infringement
contention amendments. I took both depositions. Because Google designated the transcripts
as “Confidential – Attorney’s Eyes Only,” they are not being submitted herewith to avoid the
necessity to file them under seal. The corporate testimony that Netlist was finally able to
obtain from Google during February and March 2010 included the following:
• the identity of the different 4-Rank FBDIMMs supplied by Google and
their part numbers;
• the specific serial signal protocol used by Google’s “logic element”
component of the accused 4-Rank FBDIMMs (called an “Advanced Memory Buffer” or
“AMB”) and the manner in which the logic element is informed about the rank to which
command and address signals are to be directed;
• the maximum number of memory ranks to which control and command
signals received by Google’s 4-Rank FBDIMMs may correspond;
• Google’s use of infringing eight gigabyte (“8GB”) 4-Rank FBDIMMs
and 2GB 4-Rank FBDIMMs in addition to 4GB 4-Rank FBDIMMs;
• the manner in which Google’s AMBs generate output command signals
such as row address strobe (“RAS”) signals, column address strobe (“CAS”) signals, and
write enable (“WE”) signals to a selected rank of memory to execute DRAM commands such
as read, write, refresh, precharge, etc.;
• the specific AMB part numbers and suppliers used by Google;
• the identity of the contract manufacturers who have assembled 4-Rank
FBDIMMs for Google;
• Google’s receipt of a letter from Netlist to JEDEC in January 2008
which identified the ‘386 Patent and its relationship to the JEDEC AMB Quad Rank Support
Standard that Google admits to practicing and Google’s actions in response to the Netlist
letter;
• Google’s admission that the AMB is a form of an application specific
integrated circuit (ASIC);
• Google’s use of edge connectors on its 4-Rank FBDIMMs to connect
the modules to memory slots in its servers.
—–
You get the general impression that GOOG is not too eager to present their own employees’ side of the case (generally indicative of a fear that early discovery will only harm GOOG’s case) – this is especially odd given that this case was brought by GOOG to protect itself against an injunction (had NLST gone to court to stop GOOG’s servers):
From docket 113:
—–
(pg. 5 )
Netlist also sought to obtain deposition testimony from Google concerning the
structure and operation of its accused 4-Rank FBDIMMs. Google identified Robert Sprinkle as “a person employed by Google who is knowledgeable about the structure and operation of the accused products.” Id. at 5. Google offered Mr. Sprinkle for deposition on January 8, 2010, and Netlist accepted. Hansen Dec. at ¶ 7, Exh. D. Google also insisted that Netlist simultaneously depose Mr. Sprinkle in his capacity as a fact witness and a Rule 30(b)(6) witness. Id.
In early December, Netlist filed a related lawsuit against Google (Case No. 09-05718-
SBA) alleging infringement of U.S. Patent No. 7,619,912, which had just issued on
November 17, 2009. Hansen Dec. at ¶ 8. Following the filing of the lawsuit, the parties jointly requested consolidation of this lawsuit and the ‘912 Patent lawsuit, which the Court ultimately denied on February 3, 2010. Hansen Dec. at ¶ 8. During the pendency of the request for consolidation, Google refused to produce any witnesses for deposition, including Rob Sprinkle:
Because the Court has not yet ruled as to whether two suits between the parties
will be consolidated, Google will not be presenting Mr. Sprinkle for deposition
on January 8th, and will not be presenting any witnesses for deposition (whether
in individual or 30(b)(6) capacities) until the scope of the case is clarified and, if
the cases are consolidated, a new schedule is entered. Fact discovery remains
open until the end of March, and there is no looming deadline that makes it
important for Netlist to take Mr. Sprinkle’s deposition in early January.
Hansen Dec. at ¶ 9, Exh. E (emphasis added). On January 6, 2010, Netlist again noticed Mr. Sprinkle’s fact and Rule 30(b)(6) depositions, this time for February 4-5, 2010. Hansen Dec. at ¶ 10, Exh. F.
In late January, Google replaced its counsel in this lawsuit, and Mr. Sprinkle was not
produced for deposition on February 4 or 5. Hansen Dec. at ¶ 11. Instead, he was ultimately produced on February 18, 2010. Hansen Dec. at ¶ 11. On March 11, 2010, an additional Rule 30(b)(6) witness, Andrew Dorsey, provided testimony about the manner in which Google induces the infringement of the ‘386 Patent by supplying contract manufacturers with components and directions for assembling 4-Rank FBDIMMs. Hansen Dec. at ¶ 11, Exh. H. Google designated both Mr. Sprinkle and Mr. Dorsey’s depositions as “Confidential – Attorney’s Eyes Only” under the Court’s Protective Order, contending that the information provided by the witnesses was not publicly available. Hansen Dec. at ¶ 13.
—–
From the statement above, you can judge how clueful GOOG's defence is about GOOG's own actions:
—–
quote:
As set forth therein, Google stated that it could not respond to Requests Nos. 6, 11, 13, 14, 16, 17, 19, 21, 23, 24, 25, and 26 concerning its own 4-Rank FBDIMMs because it "lack[ed] sufficient information." Id. at 8, 11-15 (Request Nos. 6, 11, 13, 14-17, 19, 21, 23-26)
—–
And this statement:
—–
quote:
As stated at page 5 therein, Google identified Robert Sprinkle as an employee who is knowledgeable about the structure and operation of the accused 4-Rank FBDIMMs and gave the following answer as to why it contends that its 4-Rank FBDIMMs are allegedly non-infringing:
Google does not infringe any claim of the ‘386 patent because one or more
elements required to be present by each claim is missing from Google’s accused
products, both literally and under the doctrine of equivalents. For example, the
accused products do not include a structure that meets the "logic element"
limitation because they nowhere include the functionality that is claimed in that
limitation.
—–
From the above-mentioned Exhibit C (GOOG's responses to NLST's questions or "requests") you can see the aspects that GOOG is avoiding answering at this time – a few of the responses that NLST claims (see above) GOOG was unable to answer are given below. In most of them, GOOG says it "lacks sufficient information" to answer questions about its own product:
From docket 113-1:
—–
(pg. 26 )
Subject to, without waiving, and based upon the foregoing objections, Google responds as follows: Google admits that certain FBDIMMs used in certain of its servers follow the Mode C serial channel communication protocol set forth in the JEDEC standard for the respective DRAM used on the DIMM. To the extent not admitted, Google lacks sufficient information to admit or deny this Request. Google reserves the right to supplement or amend its response at an appropriate time.
—–
(pg. 29 )
REQUEST FOR ADMISSION NO. 11:
In certain of Google’s serves, at least one Google AMB receives bits (“Google’s AMB Input Bits”) from the server’s memory controller.
RESPONSE TO REQUEST FOR ADMISSION NO. 11:
Google incorporates by reference each of the General Objections. Google further objects to this Request as vague and ambiguous as to at least the terms “Google AMB,” “receives” and “memory controller.” Google further specifically objects to this Request on the basis of General Objection No. 2, above, concerning the “bit” terms.
Subject to, without waiving, and based upon the foregoing objections, Google responds as follows: Google lacks sufficient knowledge or information to admit or deny this Request at this time. Google reserves the right to supplement its response at an appropriate time.
—–
(pg. 10 )
REQUEST FOR ADMISSION NO. 13:
In certain of Google’s servers, at least one Google AMB receives DRAM Address Bits from the server’s memory controller.
RESPONSE TO REQUEST FOR ADMISSION NO. 13:
Google incorporates by reference each of the General Objections. Google further objects to this Request as vague and ambiguous as to at least the terms “Google AMB,” “Address Bits” and “memory controller.” Google further specifically objects to this Request on the basis of General Objection No. 2, above, concerning the “bit” terms.
Subject to, without waiving, and based upon the foregoing objections, Google responds as follows: Google lacks sufficient knowledge and information to admit or deny this Request at this time. Google reserves the right to supplement its response at an appropriate time.
—–
In Exhibit C we have GOOG responding to a second set of questions from NLST. Here NLST asks why GOOG did not admit certain things in the first set of questions:
From docket 113-1:
—-
(pg. 46 ) – Exhibit C
INTERROGATORY NO. 9:
For each request for admission that Google did not admit in Netlist's First Set of Requests for Admission of Plaintiff Google, Inc., served September 10, 2009, please explain why Google did not admit the request, and identify all documents that support the basis for Google's response to the request and persons with knowledge of the basis for Google's response to the request.
RESPONSE TO INTERROGATORY NO. 9:
Google incorporates each of the foregoing General Objections as if set forth fully in response to this interrogatory. Google further objects to this interrogatory to the extent it calls for information protected by the attorney-client privilege, the work product doctrine or any other applicable exemption from discovery. Google further objects to this interrogatory as over broad, unduly burdensome, and duplicative to the extent it requests Google to re-state information that it has previously provided, or is concurrently providing, elsewhere.
Subject to and without waiving the foregoing objections, Google responds as follows: Google’s responses and objections to Netlist’s First Set of Requests for Admission are fully compliant with the requirements of Federal Rule 36, and as such, those responses and objections adequately disclose the reasons for Google’s denials and partial denials. Google incorporates those responses and objections here by reference.
—-
Recently NLST has expanded the scope of charges against GOOG. GOOG has not accepted this expansion (because only one week remains before close of fact discovery – from e-mail dated Mar 25, 2010 – pg. 99 of docket 113-1).
So if GOOG needs time, we may see the court delay the end of discovery (maybe). Or it may deny NLST's expansion of charges because it expands the complexity of the case.
It is interesting how GOOG delayed Sprinkle's testimony (awaiting a consolidation which didn't happen) and is now saying there is not enough time for discovery.
It seems NLST has been deposing (taking testimony from) GOOG's AMB (buffer chip) suppliers – pg. 103 of docket 113-1 (Exhibit N) refers to IDT and NEC:
—-
quote:
Netlist also just deposed Google's AMB suppliers, IDT and NEC, this week, and will use the information from those depositions to support its '912 Patent Infringement Contentions. Thus, any alleged "delay" will ultimately inure to Google's benefit in the form of more detailed and specific infringement contentions.
—-
In addition GOOG was set to depose JEDEC on March 30.
Regarding JEDEC, we see that JEDEC did circulate a letter to members about NLST’s contention that the JEDEC “Mode C” standard was infringing NLST IP:
From docket 113-1
—-
pg. 44 (Exhibit C – pg. 6)
INTERROGATORY NO. 7:
State the date on which Google first became aware of the ‘386 patent, the patent application that issued as the ‘386 Patent, any patent application to which the ‘386 Patent claims
priority, and/or any Netlist patent application disclosing and/or claiming memory density multiplication, memory rank decoding, and/or memory rank multiplication; describe the circumstances leading to such first awareness, including the identity of the person(s) involved, the identity of all documents which refer or relate to such first awareness, and/or the circumstances leading to such first awareness.
RESPONSE TO INTERROGATORY NO. 7:
Google incorporates each of the foregoing General Objections as if set forth fully in response to this interrogatory. Google further objects to this Interrogatory to the extent it calls for information protected by the attorney-client privilege, the work product doctrine, or any other applicable exemption from discovery. Google objects to this Interrogatory as calling for the production of information that is neither relevant nor likely to lead to the discovery of admissible evidence to the extent it requests information concerning patents other than the '386 patent in suit. Google will respond concerning the patent in suit only. Google further objects to this Interrogatory as vague and ambiguous as to at least the terms "memory density multiplication," "memory rank decoding," and "memory rank multiplication." Google further objects to this Interrogatory as over broad and unduly burdensome to the extent it would require an investigation into the aforementioned irrelevant patents concerning vague and ambiguous subject matter, which have no bearing on this case.
Subject to and without waiving the foregoing objections, Google responds as follows: Google was first made aware of the '386 patent in suit by an e-mail from Mr. Phileasher Tanner of JEDEC to various JEDEC mailing list recipients, including Mr. Rob Sprinkle and Mr. Andrew Swing of Google on or about Jan. 10, 2008, forwarding a Netlist patent disclosure letter concerning the patent. This e-mail, and the attached letter, were produced by Google in this matter as GNET034096-97 and GNET269919-20.
—-
Hi netlist,
It’s interesting to see how the pre-courtroom drama is playing out amongst Google and Netlist. Thank you very much for the updates.
NLST yesterday got qualified by SuperMicro (SMCI).
So this is now the first third-party validation of NLST technology.
For a good overview of what this means, check out these threads:
http://messages.finance.yahoo.com/Business_%26_Finance/Investments/Stocks_%28A_to_Z%29/Stocks_N/threadview?bn=51443&tid=15147&mid=15147
Here’s why Supermicro deal is a BIG DEAL
http://messages.finance.yahoo.com/Business_%26_Finance/Investments/Stocks_%28A_to_Z%29/Stocks_N/threadview?bn=51443&tid=15172&mid=15172
Putting numbers on the Supermicro Deal
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=12961&mid=15178&tof=10&frt=2#15178
Inphi seems to have had a delay
quote:
NLST yesterday got qualified by SuperMicro (SMCI).
So this is now the first third-party validation of NLST technology.
http://finance.yahoo.com/news/Supermicro-Qualifies-Netlists-prnews-2550444882.html?x=0&.v=1
Supermicro Qualifies Netlist’s HyperCloud Memory on High-Density Servers
Netlist’s HyperCloud and Supermicro Optimize Server Utilization
Press Release Source: Netlist, Inc. On Monday April 12, 2010, 6:00 am EDT
…
“With Netlist’s HyperCloud memory, our servers empower customers to improve their productivity and to support memory-intensive applications such as cloud computing and virtualization,” said Wally Liaw, vice president of sales at Supermicro. “HyperCloud helps us to uniquely position our high memory footprint servers with unprecedented levels of performance in these growth markets.”
By optimizing server utilization, HyperCloud improves datacenter economics associated with memory intensive, high performance computing applications and workloads, including virtualization, cloud computing, online transaction processing, video services and storage. Servers in these datacenters are typically underutilized due to memory bandwidth and capacity bottlenecks. Improving performance while lowering operating and capital expenses in datacenters, increases utility out of new and existing servers.
…
netlist
Thanks for the information. I am just becoming familiar with the legal side of the technical field, so it is taking me time to wrap my mind around the information that you supplied. Correct me if I’ve got it wrong.
I gather that Google is using 4-rank memory addressing in many of their servers but may have implemented it using their own ASIC and command codes, thus attempting to give Google protection against infringement claims. In my opinion, Mode C is Mode C no matter how it is implemented. It's almost certain that Google got the idea from Netlist, and Netlist holds the patents protecting many elements of Mode C. The court will decide if that patent is broad enough to protect against superficial changes.
Netlist had a chance to completely optimize the codes claimed as its IP. Often there is only one optimized code for any CPU operation such as memory operations. Even if Google wins by using a different code, it may be second-rate code (less efficient) compared to Netlist's IP. That efficiency is hard to make up through ASIC design since the ASIC is subservient to the CPU. This could mean that Google loses memory access speed because of inferior code. Moving terabyte upon terabyte of data over time using less efficient code adds up to real money. Even if Netlist loses its claim because of a slightly different code, competitors may have to either use Netlist's code as written and respect IP, or put out an inferior product using inferior code.
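A toy back-of-the-envelope calculation can make the "inefficient code adds up to real money" point concrete. Every number below is an assumption picked purely for illustration – none of it comes from Google's or Netlist's actual figures.

```python
# Illustrative sketch only: all figures below are assumptions, not real
# Google/Netlist data. The point is that a small per-access efficiency
# penalty, multiplied across a datacenter's memory traffic, compounds.

PETABYTES_MOVED_PER_DAY = 1.0   # assumed memory traffic across a datacenter
EFFICIENCY_PENALTY = 0.05       # assumed 5% extra work from less efficient code
COST_PER_PETABYTE = 50.0        # assumed dollars per petabyte moved (power, etc.)

daily_overhead = PETABYTES_MOVED_PER_DAY * EFFICIENCY_PENALTY * COST_PER_PETABYTE
yearly_overhead = daily_overhead * 365

print(f"assumed overhead: ${daily_overhead:.2f}/day, ${yearly_overhead:.2f}/year")
```

Scale the assumed traffic or cost figures up to datacenter fleet size and the overhead scales linearly with them – which is the commenter's argument in a nutshell.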
It looks like Google stands a chance of shooting itself in the foot twice, since it's giving Netlist an opportunity to see exactly how it implemented its flavor of Mode C, and may be delaying or preventing the use of a more elegant technology from Netlist.
GOOG was informed (via letter from JEDEC to its member companies – of which GOOG was one) of the conflict pointed out by NLST in the JEDEC "Mode C" proposed standard.
Yet GOOG continued to be blase about it, continued contracting NEC and IDT to make the AMB (buffer chips) for use on the memory modules, and explicitly went out and manufactured (or had manufactured) these things and then used them in its servers.
That is a high degree of complicity.
In recent filings (April 14, 2010), GOOG is claiming it answered “don’t know” to many questions about the buffer chips because they don’t know what they do – that is an odd assertion given that it is GOOG which is CONTRACTING with these subcontractors to deliver them these buffer chips and to manufacture the stuff.
From docket 117:
(pg 3. )
Google has been forthcoming about its limited knowledge and understanding of the accused 4-Rank FBDIMM products.
…
(pg. 5 )
Google had (and continues to have) insufficient knowledge because it neither designed nor manufactured the components at issue—the AMBs, which are being accused as the "logic element" of the asserted claims.
The hardware that GOOG had manufactured seems to have been following the JEDEC proposed standard or somewhat close to that. NLST has taken testimony from NEC and IDT (AMB buffer subcontractors for GOOG) regarding the design of the AMB and how it relates to JEDEC proposed standard.
As long as GOOG is using "Mode C", it is a telling indicator that it was continuing to pursue a path (one that JEDEC itself is wary of and had warned its members about) of violation of NLST IP.
quote:
In recent filings (April 14, 2010), GOOG is claiming it answered “don’t know” to many questions about the buffer chips because they don’t know what they do – that is an odd assertion given that it is GOOG which is CONTRACTING with these subcontractors to deliver them these buffer chips and to manufacture the stuff.
Also, while GOOG may be entitled to claim this WHILE claiming NLST IP is not related to GOOG, I would think it weakens GOOG's argument attacking NLST IP if they are not clear about the technology that they ARE using.
If they don’t know what they are using, how can they claim non-relatedness to NLST IP ?
Since GOOG vs. NLST is GOOG's attempt to deflect an imminent injunction closing GOOG servers (if NLST requested it after complaining to GOOG some time back), there IS a burden on them to show evidence that distinguishes their technology from the IP claimant's (NLST's).
One could argue the burden of proof is on NLST – however we are talking about a hush-hush internal manufacture of hardware by GOOG, and some degree of cooperation IS required from GOOG. What NLST can show is circumstantial evidence that on the face of it GOOG IS violating NLST IP – that has been shown in discovery, where GOOG's servers turned out to be using "Mode C", and GOOG has accepted that it DOES use "Mode C".
The question naturally arises: even if GOOG doesn't know WHAT it's doing – it's just following the JEDEC proposed standard and telling subcontractors to "just use that" – where was the caution when JEDEC informed members that the proposed standard was "iffy" because of potential violation of NLST IP?
Instead GOOG continued doing what it was doing, and it was not until NLST asked questions of GOOG directly that GOOG went to court (it did not even know what to answer NLST directly) in order to "manage" the process of (possibly) eventual concession to NLST (in an orderly court environment).
In any case, this undermines the thesis that GOOG has great IP in this area (like MetaRAM – even though MetaRAM also conceded to NLST).
Basically GOOG is relying on JEDEC proposed standard or such stuff – after all when it contracts with NEC and IDT to make buffer chips but leaves it up to them – they must be following some standard or method – likely the JEDEC proposed standard (of which GOOG was a part).
If so, it boils down to JEDEC proposed standard vs. NLST dictating GOOG fortune in this case.
“Don’t know”
Well that certainly is an interesting defense. GOOG must be squirming pretty badly to come up with that one. It may have worked for Ronald Reagan in the Iran-Contra trial ("I don't recall"), but ignorance is not an excuse for breaking the law in the case of patent law.
It almost seems that while claiming stupidity they are also trying to blame the companies they contracted to manufacture for them. If I were the judge, this kind of crap in the courtroom would piss me off.
Has this approach actually ever worked in a case like this that anyone is aware of?
Why is GOOG delaying a settlement?
They don’t really think “don’t know” carries merit?
Seems like the kind of defense the guy that has all the money and knows he is wrong uses in an attempt to bleed out the smaller guy.
quote:
Has this approach actually ever worked in a case like this that anyone is aware of?
Why is GOOG delaying a settlement?
They don’t really think “don’t know†carries merit?
Seems like the kind of defense the guy that has all the money and knows he is wrong uses in an attempt to bleed out the smaller guy.
Recently NLST expanded the charges against GOOG – as outlined in this post above:
https://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-263652
Docket 117 is the GOOG argument for why new charges should not be admitted now.
As part of that argument, GOOG attempts to answer why their “don’t know” answers were not really a delaying tactic – so that explanation (i.e. the subcontractors did it) is part of that argument.
I was highlighting that because it reflects badly on GOOG’s overall case – it pisses off their subcontractors, it feigns ignorance of GOOG’s role in JEDEC proposed standard. And it undermines GOOG’s determination as a cognizant party – in effect it gives the impression they were juvenile in temperament (which undermines their credibility in fighting an IP case against NLST).
To be fair to GOOG, docket 117 is not really their "defence" (i.e. "don't know"), but actually the arguments they are making to prevent NLST from expanding the charges against GOOG.
Netlist – thank you as usual!
If GOOG and NLST are scheduled for a settlement conference on the 30th, why is GOOG still submitting filings to the court? Is this normal? Are they still trying to argue the case?
Settlement conference is routine – doesn’t mean much.
Meanwhile GOOG has to fight whatever it can – notice that NLST is piling on charges on GOOG, so GOOG has to have rebuttals. So that is what we are seeing. That doesn’t mean that settlement cannot happen. Court business still has to be dealt with on a day to day basis.
The only way Google would have a chance of getting around Mode C is to custom build a non-JEDEC server that would have different addressing protocols. That would add extra cost to servers and memory modules. If Google is not using the inexpensive 2 gig memory chips that Netlist's IP enables it to use, Google would come up short. There would be obstacles concerning addressing, speed, power and thermal management. Google's approach makes no sense unless the new design delivers a quantum leap in overall performance per dollar spent. There is no substitute for "on board memory" when it comes to servers. I think the high price of custom built, non-JEDEC servers and memory modules would only make sense if Google is going into the server manufacturing business. Potential server purchasers would be leery of hitching their wagons to a competitor using non-standard components, because Google would control much of the IP. I wonder what Google's end game is?
I recall that Google prided itself on designing its own specialized hardware when it was smaller and more nimble. Focus can be easily lost with hyper growth. An error only becomes a mistake if it's not corrected. Netlist may prove that small and nimble is still best for focused goals.
quote:
I recall that Google prided itself on designing its own specialized hardware when it was smaller and more nimble. Focus can be easily lost with hyper growth. An error only becomes a mistake if it's not corrected. Netlist may prove that small and nimble is still best for focused goals.
Google’s “custom hardware” was essentially use of generic hardware in ways that could be scaled. The search problem was designed to be scalable, and they tried to make the hardware side scalable – based on many cheap servers which could be scaled as needed.
It is in GOOG’s interest if memory technology becomes standardized – memory costs go down.
In the absence of a good solution – and with the JEDEC proposed standard – GOOG tried to make early what was coming down the road later. However, they failed to account for the legal ownership of the IP – their behavior was consistent with their beginnings as an academic type of organization, but it ignores their reality as a technology behemoth of a company.
Within GOOG, their hardware division got involved in the memory project – so a bit of internal engineering momentum (and because they are a consumer of their own hardware division) may have blindsided their legal department (?).
However, I am thinking that if there WAS internal momentum, it should have dissipated by now – and the move out of Fish & Richardson (#2 in IP litigation) to King & Spalding (#2 in ARBITRATION, but not in the top 30 in IP litigation) seems like a very deliberate move – and may have come from the very top of GOOG (after a meeting, perhaps: "ok, guys, what do we do").
So probably GOOG NOW knows what they have to do – it's just that the legal team does what is best to achieve a good balance if there ever are settlement discussions.
NLST had already offered RAND terms to JEDEC. It seems the clear way forward is to have GOOG pay some penalty (or some future business – SMCI qualification of NLST probably goes a long way towards strengthening NLST's position as having something tangible to offer – a working replacement for the stuff GOOG is doing currently).
And to have JEDEC get good terms for its member memory module makers – for the JEDEC proposed standard.
However, it would seem that a memory module maker would be more interested in getting the NLST HyperCloud IP license than the JEDEC proposed standard (which may require BIOS changes).
Or at least they would want NLST IP for the short term, with the JEDEC proposed standard for later (when BIOS-updated boards become the norm) – maybe they would have to pay lower licensing fees with that.
Total speculation of course.
There may have been a misjudgement by the hardware division of Google. I have heard the term "patent troll" used by Google in the past. I don't know if they used that term referring to Netlist directly. The fact that Netlist has a certified product using the contested IP blows that notion clean out of the water. That the company has a tendency to use such references gives insight into elements of Google's internal culture. I speculate that the only way this gets to court is that
Google’s councel thinks that it has a much better than an 50% chance of winning. A lost could be a public relations desaster. Meanwhile, discovery will buy time.
In my opinion, it would be amazing if Google had no knowledge of the infringing technology because they left it up to the individual suppliers to determine how to design memory modules for Google's use. That could result in a hodge-podge of different specs and performance characteristics. Any company making that kind of investment would want to have tight control of what it is getting for its money.
Makes sense.
Recently NLST expanded the charges against GOOG – as outlined in this post above:
https://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-263652
GOOG opposes, claiming it "didn't know" how its memory module components operate and was not delaying, etc.
In response, NLST files a pretty interesting read on events.
—-
Given the foregoing, Google seeks to confuse the Court by arguing that Netlist "has had detailed knowledge of AMBs for years" based upon its participation in the industry's standard setting process and analyses of third-party products. Opposition p. 3:1-15. Google then contrasts Netlist's expertise with its own professed ignorance as to the function of AMBs. Opposition p. 3:16-17 ("In contrast to Netlist's detailed knowledge of AMBs, Google, as an acquirer of 4-rank FBDIMMs and the AMBs they include, has only limited knowledge of the products").
In doing so, Google argues that Netlist somehow lacked diligence in amending its infringement contentions because Netlist would be more knowledgeable than Google on the function of AMBs in the computer memory industry and could have sought the information from
other third parties earlier.
Google’s argument is a red-herring; Netlist’s knowledge of AMBs in the computer
memory industry in general is irrelevant. Indeed, if the information Netlist sought was generally
known in the industry, Google would presumably not have designated the deposition transcripts
of its Rule 30(b)(6) witnesses as "Confidential-Attorney's Eyes Only." Hansen Dec. ¶ 12.
Netlist needed and sought information on how Google used the third party AMBs it purchased
and whether and how its use of the AMBs conformed to JEDEC standards.
The proposed amended infringement contentions concern specific information regarding Google's infringing use of AMBs with its 4-Rank FBDIMMs. This information includes details such as the number and type of "memory devices" (i.e., DRAM chips), the physical connection between the FBDIMMs and a computer server, the physical connections between the AMBs and the DRAM chips and the number of ranks in which the DRAMs are configured, the types of 4-Rank FBDIMMs used by Google (2GB, 4GB, 8GB), and the specific manner in which the AMBs operate when used by Google, including the types of commands they perform and how they perform them. Google delayed producing this information until the February and March 2010 depositions of its Rule 30(b)(6) witnesses. While some of the information may have been included in the hundreds of thousands of pages of documents that Google produced, the Rule 30(b)(6) witnesses were critical in navigating those documents and establishing their relationship to the structure and operation of the accused products.
For example, when the current infringement contentions were created, Netlist only knew that Google was using 4 Gigabyte 4-rank FBDIMMs using rank multiplication technology, infringing upon claims 1 and 11 of the patent-in-suit. However, upon taking the deposition of Mr. Sprinkle on February 18, 2010, Netlist learned that Google's AMB is a form of an application specific integrated circuit (ASIC), as recited in claim 5 of the '386 Patent. At Mr. Sprinkle's deposition, Netlist also learned that claim 9 of the '386 patent was implicated by the specific manner in which Google's AMB is informed about the rank to which command and address signals are to be directed. Hansen Dec., ¶ 12.
—-
And some information on how the GOOG server was made available for inspection – and the flip-flopping by GOOG on “Mode C” being used:
From docket 133 (133-main):
—-
(pg. 5 )
For example, Google sought to block Netlist from inspecting a Google server to obtain information and photographs of the server and its 4-Rank FBDIMMs. The information and exhibits were eventually used to examine Google’s witnesses.
In an effort to convince the Magistrate Judge to deny Netlist's inspection request, Google admitted that its 4-Rank FBDIMMs operated in accordance with "JEDEC Mode C," an infringing mode of operation. Joint Letter of the Parties to Magistrate Judge Spero, dated May 19, 2009 at 5 (See Declaration of Steven R. Hansen in support of Netlist's Reply Brief, dated April 20, 2010 ("Hansen Reply Dec.") at ¶ 2, Exh. A).
Magistrate Judge Spero eventually ordered the inspection to go forward. See May 29, 2009 Order Granting Defendant's Request For Production No. 12 For Inspection Of A Functioning Google Server [Docket No. 31]). Further, in October 2009, Google admitted in responses to Requests for Admission that it uses "Mode C." Google's Response to Netlist's RFAs at 6-7 (Hansen Dec., ¶ 5, Exh. B). However, on the evening of the last day of discovery, Google "supplemented" its responses to deny using Mode C:
May 19, 2009 Letter to Magistrate Judge Spero (page 5) (Document 27; Hansen Reply Dec., ¶ 2, Exh. A) (original emphasis): "Google does not dispute that its FBDIMMs operate in Mode C . . . ."
October 27, 2009 Response to Netlist's Request for Admission No. 3 (Hansen Dec., ¶ 5 Exh. B): "Google admits that certain FBDIMMs used in certain of its servers follow the Mode C serial channel communication protocol set forth in the JEDEC standard for the respective DRAM used on the DIMM."
March 30, 2010 Supplemental Response to Netlist's Request for Admission No. 3 (Hansen Reply Dec. ¶ 3, Exh. B): "Google lacks sufficient knowledge and information to admit or deny this Request and therefore denies it."
—-
And on GOOG’s “lack of knowledge”:
From docket 133 (133-main):
—-
(pg. 7 )
While Google protests that it "lacks knowledge" of how its own 4-Rank FBDIMMs are configured and operate, it is undisputed that when Google's 30(b)(6) witnesses were finally deposed (the protracted history of Google's failure and refusal to timely produce its witnesses is detailed in Netlist's moving papers), Messrs. Sprinkle and Dorsey had plenty of non-public knowledge regarding them. Hansen Dec., ¶ 12. Thus, Google's suggestion that "Google's lack of knowledge undermines rather than advances Netlist's position" is false. Opposition p. 5.
Google’s effort to hide behind its purported “lack of knowledge†is nothing more than a tactic used to avoid meeting its discovery obligations.
As such, Google delayed in revealing highly-relevant information by obstructing inspection of its server, delaying the production of its 30(b)(6) witnesses, and denying requests for admission based upon lack of knowledge when in fact its witnesses had extensive knowledge of the subject. After going to such great lengths to delay disclosing relevant information, Google cannot now complain about the timeliness of Netlist’s request to amend its infringement contentions; Google itself is responsible for any delay in the production of the information necessitating the proposed amendments.
—-
It seems NLST is claiming GOOG manufactured “hundreds of thousands of computer memory modules”.
I am not sure if it is that much – but maybe NLST has information on how common these modules are in current GOOG setup.
Also contains an explanation of “Mode C” (and NLST IP).
From docket 133-2 (which outlines earlier arguments – which eventually led the court to refuse GOOG arguments and grant NLST request for examining GOOG server):
—-
(pg. 2 )
This is a patent infringement case. Netlist owns IP relating to computer memory modules, and it shared some of those inventions under NDA with Google while Netlist’s patents were pending. Google turned down a business relationship with Netlist. Netlist alleges that Google then went on to manufacture hundreds of thousands of computer memory modules using the Netlist technology, and that it now uses those memory modules in server computers at Google data centers.1
…
One of Netlist’s production requests was for an allegedly infringing Google server. Google has refused to produce one. In its Request for Production No. 12, Netlist requested that Google produce a server containing the Accused Devices, including all software, firmware, and/or “register-setting code” used in the operation of the Accused Devices. The purpose of this request is to allow Netlist to verify that the FBDIMMs used in Google servers function as described in the ‘386 patent.
In particular, for Google’s FBDIMMs to be infringing, they must be capable of being set to run in what the industry standard-setting body, JEDEC, refers to as “Mode C”. In general terms, Mode C fools the computer into thinking that it is accessing two sets of memory chips, when in fact its access requests are split among four less-expensive sets of memory chips. This yields tremendous cost and energy savings. When an infringing server is turned on, it sets the appropriate registers for Mode C, and it can then report that it has the amount of memory contained on the memory module in those four sets of chips (e.g., 4 gigabytes).
—-
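The rank-multiplication idea described in the excerpt above (the host sees two ranks while accesses are split among four physical ranks) can be sketched in a few lines. This is a toy illustration only; the function name, bit position, and decoding scheme below are invented for this post and are not taken from JEDEC’s Mode C definition or Netlist’s patent claims.

```python
# Toy sketch of rank multiplication: the host believes it addresses
# 2 ranks of high-density DRAM, while the buffer uses one extra
# ("stolen") address bit to select among 4 physical ranks of cheaper
# chips. Bit position and names are invented for illustration.

def decode_physical_rank(host_rank: int, row_address: int, density_bit: int = 15) -> int:
    """Map a host-visible rank (0 or 1) plus a stolen high-order
    row-address bit to one of 4 physical ranks (0-3)."""
    if host_rank not in (0, 1):
        raise ValueError("host sees only 2 ranks")
    stolen_bit = (row_address >> density_bit) & 1
    return (host_rank << 1) | stolen_bit

# The module still reports the full capacity of all 4 physical ranks
# (e.g. 4 x 1 GB presented to the host as 2 x 2 GB).
```

Accesses with the stolen bit clear land on one physical rank and with it set on the other, so the host’s two chip-selects fan out transparently over four sets of chips.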
And GOOG’s answer there includes acceptance of “Mode C”.
From docket 133-2:
—-
(pg. 6 )
Netlist can determine FBDIMM operation from the code Google has agreed to make available, including the code that controls Mode C operation, and FBDIMM configuration can be determined from the design files and the product itself. Google does not dispute that its FBDIMMs operate in Mode C and the code it has agreed to produce is the code relating to Mode C operation.
—-
From docket 133-2 (NLST letter to GOOG – more on “Mode C”):
—-
(pg. 16 )
Netlist requested that Google produce a server so that it can test, among other things, Google’s FBDIMM functionality. The registers controlling the memory module running in what JEDEC refers to as “Mode C” are set by and on the CPU, not the FBDIMM. This is a central infringement issue, and therefore simply producing a memory module and code is insufficient. Similarly, because Netlist argues that Google induces infringement by providing services using the infringing servers, Netlist needs an operational server rather than a memory module in order to prove up its case.
—-
GOOG fought production of the server tooth and nail – all the while trying to placate NLST with production of FBDIMMs, their circuit board plans, and code, while keeping the servers themselves out of it.
Eventually the court ordered GOOG to produce the server (which is what led to discovery against GOOG, and helped expand claims against GOOG).
Netlist,
I have to admit the “settlement conference” is a confusing subject to me. I was under the impression that if the court ordered one, it was probably because the Judge saw a definite violation by one side. Now if I understand it correctly, what it is essentially doing is forcing the sides to declare what they are looking for in a post-trial settlement? Am I correct?
Obviously I have not read all the documents (only what you have been kind enough to share), but this seems pretty one-sided. What does GOOG hope to gain by continuing to delay or going to trial? Are they simply trying to bleed the cash out of NLST?
In most cases with jury trials, the court forces the parties to a “settlement conference” with some arbitrator/judge in order to give them the chance to settle early i.e. out of court settlement – thus saving the court the expense and bother of going to jury trial if the matter can be dismissed/resolved early.
However if the parties are not amenable to settling this early, the settlement conference usually produces no result.
That is why I have noted that the settlement conference date is interesting but does not necessarily mean anything.
http://en.wikipedia.org/wiki/Settlement_conference
quote:
Matters discussed in a settlement conference are confidential, and cannot be introduced as evidence in court. All such information would be considered privileged or hearsay. There is one exception to this rule: statements of fact made by criminal defendants in settlement discussions over disputed civil claims asserted by government agencies are admissible in the criminal case.
netlist,
Has Netlist given any indication of what they are looking for in the settlement? It seems that the assertion that Google has installed “hundreds of thousands” of mode C infringing modules signals that Netlist believes that Google has committed tremendous damage. How often are agreements reached during Settlement conferences in cases similar to this one?
I have not found any indication of what NLST is looking for in a settlement.
Though CEO Hong has signalled that:
From Q4 2009 CC (Feb 18, 2010):
quote:
cannot disclose litigation .. vigorously protect
not in litigation business .. will seek reasonable settlements
Well, based on this information, I can’t see why both sides shouldn’t be able to establish an understanding for settlement. As it stands now the only ones gaining are the lawyers (unless there is something we don’t know yet).
Either way – should be interesting.
Thanks for all the insight.. Great board!
Netlist,
It seems in character with the leadership of Netlist to be looking forward, as evidenced by the development of their recent products. The protection of their IP could be worth more than a punitive-sized settlement. Google would look hypocritical if it rejected a reasonable settlement in which the ownership of IP is the major issue, while complaining about the lack of respect for its own IP in China. Thanks for all of the useful information that you have supplied.
My best guess is that Google assumed that because Mode C is an industry standard, they were free to use it without infringing any patents. Netlist had notified Google, as well as the concerned industry standard body that Mode C may infringe their intellectual property. Although they had a letter from NetList, they took the view that NetList is a patent troll.
Using other people’s patents even after you have been told about them seems evil to me.
I think this case will cause a lot of changes in the way Google looks at patents. Recent news reports that they have patented various parts of their architecture are significant. Open source clones of their architecture may now be infringing their patents – although Google, as per its policy, has no plans of suing others for infringement, except as a defense.
Could the lack of news out of the settlement conference indicate both parties are looking for a timely solution? The Supermicro computer server model SYS-6026T-NTR+-GS015 with 288 gig of HyperCloud memory demonstrated at Interop 2010 is good timing. Wonder how it stacks up against Google’s Icarus server?
Don’t know – the settlement conference may not mean anything, as it is mandated by the court. Their settlement dynamics may be moving at their own pace, with the settlement conference just an official date to be ignored or to go to with eyes rolling.
As expected, nothing happened on the April 30, 2010 “mandatory settlement conference” between GOOG and NLST in front of Judge Laporte for GOOG vs. NLST.
Outcome was “did not settle” (docket 136 in GOOG vs. NLST).
Parties are required to attend such settlement conferences in order to give every opportunity to avoid the time and resource expenditure by the court for jury trials.
However the parties may have their own timeline for when they want to settle – so these become a formality.
NLST had recently asked the court for permission to expand the claims against GOOG.
Judge Armstrong (Docket 134 in GOOG vs. NLST) denied permission to expand the case.
NLST claimed that since GOOG had delayed in discovery process, NLST got testimony from GOOG employees at a late stage and that helped them craft additional claims against GOOG.
Judge Armstrong has in the past denied things which might slow GOOG vs. NLST – the joint GOOG/NLST request to consolidate GOOG vs. NLST and NLST vs. GOOG was denied earlier for similar reasons. It would have slowed GOOG vs. NLST down which was in an advanced state.
Given that precedent, it was not surprising that Judge Armstrong denied expansion of claims as it would disrupt the existing GOOG vs. NLST case.
I assume this still leaves NLST vs. GOOG (which has a later timeline) open for addition of those claims by NLST.
However as indicated in the past, if GOOG/NLST settle early, it will include all cases (and may even include some JEDEC licensing of NLST IP for the JEDEC “Mode C” proposed standard). So the outcome of NLST vs. GOOG may be moot, as it will probably become part of whatever settlement is reached in GOOG vs. NLST.
This outcome was suggested in an earlier post “it would keep things on track”:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=15757&mid=15843&tof=1&frt=2#15843
Re: vicl2010v scammed you all
quote:
—-
Other than that there is a May 4 (today) hearing on NLST’s extra claims against GOOG.
If approved they would add to the burden against GOOG. If Judge Armstrong disapproves the extra claims, it would keep things on track.
—-
Judge Armstrong has earlier also said the parties should try to settle the case “sooner rather than later” – when she earlier denied GOOG/NLST joint request to consolidate GOOG vs. NLST and NLST vs. GOOG.
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=14993&mid=15027&tof=1&frt=2#15027
Re: A Lot Of 100 Lots Going Through
(Caveat: I have not read the original lawsuit or other papers. On the other hand, I have read this thread and the associated hyper linked material in detail. )
Extent of damages depends on extent of infringement, and willfulness of infringement.
Extent of infringement will depend on how many servers Mode C was used on. If it was in use in hundreds of thousands of servers, the damages will be greater.
This case is unusual: Google apparently had an ASIC manufactured for memory expansion that is compliant with JEDEC Mode C. They had been told that Mode C was NetList IP. The JEDEC standards body had also been told about it. They continued to use Mode C – they decided that NetList was a patent “troll”. Google cannot say that the legal department knew about the NetList claim, but not the hardware engineering group. This defense, if allowed, would make a mockery of the legal system. Any company would be free to argue that they are just a large company where the left hand does not know that the right hand is infringing.
The willful infringement penalty is up to 3 times the damages. The judge has discretion to increase or decrease the damages. Subsequent Google conduct, such as refusing to admit or deny infringement and delaying the discovery process, will not be looked at favorably. Even saying that NetList is a patent troll may not be looked at favorably at the stage of calculating damages. Would Google say the same about an established large company in high tech? As it happens, there is only one other company that can give the same amount of memory as the upstart Netlist.
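The treble-damages point is easy to put in numbers. The module count and per-module royalty below are invented placeholders, not figures from the case; the only real anchor is that 35 U.S.C. § 284 lets a court enhance damages up to three times the amount found.

```python
# Hypothetical damages arithmetic; all inputs are invented placeholders.

def estimated_damages(modules: int, royalty_per_module: float, willful: bool) -> float:
    """Base damages = modules x reasonable royalty; a court may
    enhance up to treble for willful infringement (35 U.S.C. § 284)."""
    base = modules * royalty_per_module
    return base * 3 if willful else base

# e.g. 300,000 modules at a hypothetical $20 royalty each:
# $6M base, $18M if trebled for willfulness.
```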
I am hoping that Google is forced to settle for hundreds of millions of dollars, and then is forced to buy from NetList for 3 to 7 years. Many companies which pride themselves on being ethical have a policy when settling lawsuits. They will settle a lawsuit in which they are guilty of wrongdoing. Instead of the boilerplate “without admitting wrongdoing, we pay millions of dollars,” they will say: we admit our actions were wrong, and are taking steps – such as better decision-making processes and training – to make sure this wrongdoing does not happen again. Stealing patented ideas willfully is “evil”.
Do you think that this case will make it to court? Seems that the cost of bad PR from an unfavorable court outcome would be enough to motivate an early settlement by Google, not to mention delaying the adoption of Netlist products.
quote:
Do you think that this case will make it to court? Seems that the cost of bad PR from an unfavorable court outcome would be enough to motivate an early settlement by Google, not to mention delaying the adoption of Netlist products.
Delaying adoption – as in delaying adoption of HyperCloud within GOOG.
Yes. Adopting HyperCloud within GOOG is a possibility, depending on how HyperCloud technology stacks up to Google’s memory modules. Can you get the specs for GOOG’s modules? That information seems to be confidential. If the difference is great enough in Netlist’s favor, GOOG would be shrewd to adopt HyperCloud or some other Netlist IP, and lower its overall settlement and memory cost going forward. Having GOOG as a customer for years to come would be a substantial victory for Netlist. GOOG could also turn an embarrassing situation into an example of how to manage mis-steps, while maintaining its gleaming corporate image.
I don’t think whether GOOG’s memory is better is the issue. It probably doubles memory, but probably does not allow full-speed operation.
The issue is: It probably infringes NetList Patent, and the infringement is perhaps willful- considering NetList had notified them.
GOOG cannot possibly say “we lack sufficient knowledge” about whether they infringe. This defense will not be accepted at trial. It seems laughable for a tech giant to say they do not know if something which they designed and manufactured infringes someone’s patents. Normally companies say they do not infringe because they work around the claims, or they say the patent is invalid. I have not seen a defense like this.
Comments/analysis of Netlist’s first quarter 2010 results going forward?
http://www.netlist.com/investors/investors.html
https://www.bizjournals.com/sanfrancisco/prnewswire/press_releases/California/2010/05/11/LA03022
Anyone:
1. Have at hand, from multiple industry sources, past, current, and future market projections for DC server growth 1-10 yrs out (i.e., demand pool)? As well: top 5 server consumers, penetration, region.
2. Visit Netlist at Interop 2010 Vegas and see the “Data Center in a Box” with thoughts/impressions?
Joeq: did you see that news article of ‘papers flung from a red Peugeot’? Started a big fire back in the day…
Question: with regard to bandwidth, how well does it scale? What is the bandwidth for each DIMM (4GB, 8GB, and 16GB) and then total for a server (i.e., Supermicro etc.) partially or fully loaded utilizing eighteen, 16GB 2vRank HyperCloud DIMMs (288GB DRAM)?
Does anyone here have access to the settlement information concerning the Texas Instruments case that was just announced?
Thanks
NLST vs. TXN settled.
Problem is we can’t get access to the court records – although if it is a settlement, that may not be in the court records.
But if someone can get access (go to court basically, since that court is not on PACER), that would be nice, as it would shed light on the goings-on at JEDEC and how TXN (being fingered as the ORIGINAL leaker of info to JEDEC) may have been involved.
http://www.prnewswire.com/news-releases/netlist-settles-lawsuit-with-texas-instruments-93915489.html
Netlist Settles Lawsuit With Texas Instruments
IRVINE, Calif., May 17 /PRNewswire-FirstCall/ — Netlist, Inc. (Nasdaq: NLST) today announced that it has reached a settlement in the misappropriation of trade secrets and breach of contract lawsuit against Texas Instruments, Incorporated. The settlement resolves a dispute between the two companies concerning the use of proprietary memory modules and other related technology.
“We are pleased to have successfully resolved this case. Netlist remains committed to protecting its portfolio of intellectual property,” said C.K. Hong, President and CEO of Netlist.
Examining the language in the PR:
quote:
IRVINE, Calif., May 17 /PRNewswire-FirstCall/ — Netlist, Inc. (Nasdaq: NLST) today announced that it has reached a settlement in the misappropriation of trade secrets and breach of contract lawsuit against Texas Instruments, Incorporated. The settlement resolves a dispute between the two companies concerning the use of proprietary memory modules and other related technology.
quote:
“We are pleased to have successfully resolved this case. Netlist remains committed to protecting its portfolio of intellectual property,” said C.K. Hong, President and CEO of Netlist.
Here we have CEO Hong issuing the PR – possibly suggesting that NLST is in the driver’s seat regarding who would put out the PR.
It has NLST speaking, and it does not have a TXN representative speaking (if there was a TXN concession, TXN would have nothing to tout).
Plus the reiteration of defence of NLST IP.
Language suggests that this is to the satisfaction of both parties – and esp. NLST.
From court docket info, the settlement happened 5/10/2010.
Court-mandated settlement conference was set for 09/29/2010.
And jury trial for 10/4/2010.
See docket info for NLST vs. TXN:
http://www.sccaseinfo.org/pa6.asp?full_case_number=1-08-CV-127991
10/4/2010 08:45AM 01 CV Jury Trial – Long Cause Vacated; dismissal filed C 05/10/10 None None None
9/29/2010 01:30PM 01 CV Settlement Conf – Jury Vacated; dismissal filed C 05/10/10 02/10/10 None None
…
0038-000 Cv Ntc:Settlement 05/10/2010 None 05/11/2010 For: Netlist, Inc. / PLT
0037-000 Cv Req:Dismissal, Entire W/Prej 05/10/2010 None 05/11/2010 For: Netlist, Inc. / PLT
Against: Texas Instruments, Incorporated / DEF
So this was an early settlement – if TXN is quiet, this would suggest they conceded something which is nothing to crow about. And if so, an early settlement suggests TXN realized that delaying would not help TXN.
Note that like MetaRAM (which was making buffer chips, plus had some IP), TXN is also making some buffer chips (see above). But TXN is also accused of leaking NLST info to JEDEC.
What would a concession look like? Would TXN concede IP (MetaRAM conceded IP to NLST, and promised to not let its IP be used against NLST)? Or would TXN concede they will abandon buffer chip manufacture within TXN?
What would TXN concede regarding leakage to JEDEC?
What would be interesting is if TXN starts licensing NLST IP. That would be an indicator of the start of falling dominoes. TXN is a part of JEDEC – alleged by NLST to have leaked NLST IP to JEDEC – which may later have been used by Intel in that demo at JEDEC, which prompted NLST to inform JEDEC that this technology runs afoul of NLST IP. Which in turn prompted the letter from JEDEC to members – including GOOG. Which GOOG in turn chose to ignore, continuing manufacture until warned by NLST – in response to which GOOG went to court to prevent stoppage of its servers.
If TXN stops making buffer chips, that would negatively impact Inphi in the NLST vs. Inphi litigation as well.
We may or may not hear full details on the settlement. The settlement with MetaRAM involved a company in bankruptcy, while TXN has its reputation to protect as well.
Question is, if NLST is going to look the other way on TXN’s leakage to JEDEC, what is TXN going to promise in return?
Just for comparison, here is a recap of the NLST vs. MetaRAM litigation settlement PR at time of settlement:
http://www.prnewswire.com/news-releases/netlist-announces-settlement-of-patent-infringement-lawsuits-with-metaram-82948382.html
Netlist Announces Settlement of Patent Infringement Lawsuits With MetaRAM
Press Release Source: Netlist, Inc. On Thursday January 28, 2010, 1:25 pm EST
quote:
Under the terms of the settlement, filed in U.S. District Courts in Delaware and Northern California, MetaRAM will not sell, offer to sell, release, or commercialize the MetaRAM DDR3 controllers in the U.S. or outside the U.S. Netlist contended that MetaRAM’s DDR3 controllers and memory modules incorporating such controllers infringed its U.S. Patent No. 7,289,386, entitled “Memory Module Decoder.” A provision in the settlement protects Netlist if another company purchases MetaRAM’s patent and attempts to seek action against Netlist in the future.
“We are pleased to have successfully resolved this case,” said C.K. Hong, President and CEO of Netlist. “As the pioneer of this technology, the results of this settlement clearly underscore Netlist’s fundamental patent and product leadership. Netlist’s HyperCloud product-line embodies this foundational technology and Netlist remains committed to protecting its portfolio of intellectual property.”
Does this mean that TXN has licensed the technology to produce high memory chips?
In the upcoming generations of Intel processor the limitation on memory is reduced- a 4 socket Intel Xeon with 1024 GB of memory is in the labs of major server vendors.
This does not mean that there is no need for NetList or other memory expansion technology- it is just that the need for that technology is reduced.
quote:
Does this mean that TXN has licensed the technology to produce high memory chips?
In the upcoming generations of Intel processor the limitation on memory is reduced- a 4 socket Intel Xeon with 1024 GB of memory is in the labs of major server vendors.
Not clear WHAT TXN could offer to atone for leakage to JEDEC. It would seem like licensing would be one thing – would also be a signal to JEDEC.
Or maybe they agree to having leaked – which would be useful in NLST vs. JEDEC – which practically speaking impacts NLST vs. GOOG (since GOOG is using JEDEC “Mode C” proposed standard).
NLST has a first to market advantage which is available now – with buildout for cloud computing it is a good time to be offering this product.
However NLST HyperCloud has other advantages – notably the advantage of using “lower dollar per bit” memory chips to emulate “higher dollar per bit” memory chips.
Eventually new technology will arrive – server motherboards will change – but there is an economic value to have memory solutions that work NOW – and with existing low-priced servers.
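The “lower dollar per bit” advantage above is simple arithmetic. The chip prices below are invented placeholders just to show the shape of the trade-off: higher-density DRAM typically carries a per-bit price premium early in its life, so building a big module out of many cheaper low-density chips can cost less than using fewer high-density ones.

```python
# Back-of-envelope cost comparison; prices are invented placeholders.

def module_cost(total_gb: float, chip_gb: float, price_per_chip: float) -> float:
    """Cost of populating a module of total_gb with chips of chip_gb each."""
    chips_needed = total_gb / chip_gb
    return chips_needed * price_per_chip

# 16 GB from hypothetical 2 Gb (0.25 GB) chips at $5 each -> 64 chips, $320
# 16 GB from hypothetical 4 Gb (0.5 GB) chips at $15 each -> 32 chips, $480
```

The catch, of course, is electrical load and addressing: the buffer/decoder technology at issue in these cases is what lets the extra ranks of cheap chips look like fewer, denser ones to the host.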
Mode C has a limited lifespan going forward. Netlist doesn’t look like a one-trick pony. The fact that Netlist figured out how to increase the address range on current motherboards without BIOS changes is amazing, and Google and others thought it was useful. HyperCloud involves additional Netlist IP that should be very useful in designing memory modules for the next generation motherboard. IP needed to manage cost, space, speed, energy, and thermal issues will outlive the current Mode C requirement for expanded memory addressing. HyperCloud is a great prototype demonstrating how to engineer high capacity/performance modules even as the need for Mode C diminishes. Netlist is positioning itself to become a major industry player. They must be successful in protecting their IP and executing properly. It seems they were denied an opportunity to grow by Google’s rebuff. I would expect that the settlement would address that issue.
10-Q filed by NLST:
http://www.secinfo.com/d11MXs.rSe8.htm
On the NLST vs. TXN litigation settlement:
quote:
Trade Secret Claim
On November 18, 2008, the Company filed a claim for trade secret misappropriation against Texas Instruments (“TI”) in Santa Clara County Superior Court, based on TI’s disclosure of confidential Company materials to the JEDEC standard-setting body. On May 7, 2010, the parties entered into a settlement agreement. The court dismissed the case with prejudice.
As stated in previous post above:
quote:
From court docket info, the settlement happened 5/10/2010.
Court-mandated settlement conference was set for 09/29/2010.
And jury trial for 10/4/2010.
The 10-Q now reveals they had agreed to settle May 7, 2010.
Superficially, HyperCloud seems to offer the advantages of the JEDEC “Mode C” proposed standard plus plug-and-play operation, requiring no BIOS updates.
In addition it brings with it the integrated advantages of the “embedded passives” and NLST’s thermal IP (even heating to reduce thermal disparity so memory modules perform within tighter tolerances).
From the recent 10-Q filed by NLST:
http://www.secinfo.com/d11MXs.rSe8.htm
A good explanation of HyperCloud – pointing out that the “no BIOS changes” results in having no impact on OEM’s product cycles:
quote:
Our HyperCloud™ products can be installed in servers without the need for a bios change. As such, their design and anticipated sales launch is not dependent on the design plans or product cycle of our OEM customers. Alternatively, when developing custom modules for an equipment product launch, we engage with our OEM customers from the earliest stages of new product definition, providing us unique insight into their full range of system architecture and performance requirements. This close collaboration has also allowed us to develop a significant level of systems expertise. We leverage a portfolio of proprietary technologies and design techniques, including efficient planar design, alternative packaging techniques and custom semiconductor logic, to deliver memory subsystems with high speed, capacity and signal integrity, small form factor, attractive thermal characteristics and low cost per bit.
“Superficially, HyperCloud seems to offer the advantages of JEDEC “Mode C” proposed standard plus the plug and play and requiring no updates to BIOS.”
Jedec Mode C is a Netlist invention. That is the crux of the lawsuit.
quote:
Jedec Mode C is a Netlist invention. That is the crux of the lawsuit.
Yes, basically that it violates NLST IP. However that does not mean that it is better than HyperCloud. In fact HyperCloud – as qualified by SuperMicro (SMCI) – is a finished product which includes the advantages of the JEDEC “Mode C” proposed standard PLUS the advantage of plug and play with no BIOS updates required.
What will be the hint before there is a big settlement?
Will there be a big settlement?
http://www.google.com/ventures/images/imagestrip_locations.jpg
Before building an ASIC may be Google should see if they have the green light to do so. 🙂
Don’t know.
Hi MemoryGeek,
It’s hard to say if there will be a settlement, or if we’ll get some kind of hint beforehand if there is one.
Many lawsuits do settle, and many often on the courthouse steps in the moments before a trial.
NLST has asked the court for “summary judgment” in GOOG vs. NLST (the case GOOG brought to protect its servers from being shut down etc.) on the basis of some exhibits, mainly testimony from a JEDEC attorney and a GOOG employee.
http://en.wikipedia.org/wiki/Summary_jud…
Summary judgment
An easy-to-understand explanation of “summary judgment”:
http://answers.yahoo.com/question/index?qid=20071028194145AAxtvtV
quote:
Best Answer – Chosen by Asker
A summary judgment is a decision by a judge that decides the case early because there are no facts in dispute. The judge’s decision means that the case never goes to trial.
To ask for a summary judgment from a judge, you must do the following:
1. File a Motion for Summary Judgment asking the judge to rule in your favor. It must be filed pretty soon after discovery is complete.
2. In the Motion for Summary Judgment, you must submit case law and facts that support your Motion for Summary Judgment.
3. Generally, you won’t win a summary judgment motion unless there are NO facts in dispute – meaning the only issue outstanding is an issue of law.
For example, let’s say you and I are neighbors. Your trees were blocking my view – so I cut them down. You sue me to get the trees replaced. Both of us agree that this is what happened (that the facts are not in dispute).
Since we agree on the facts, the only outstanding issue is what the law says.
Why have a jury trial when juries ONLY decide facts – not law. Judges decide the law; therefore, the above case is RIPE for a decision.
The law states that I don’t have the right to trespass on your property and damage your property; therefore, if you file a motion for summary judgment the judge will find in your favor.
http://en.wikipedia.org/wiki/Jury_trial
Jury trial
A jury trial (or trial by jury) is a legal proceeding in which a jury either makes a decision or makes findings of fact which are then applied by a judge. It is distinguished from a bench trial, in which a judge or panel of judges make all decisions.
…
Juries usually weigh the evidence and testimony to determine questions of fact, while judges usually rule on questions of law, …
Another explanation:
http://www.legalandlit.ca/summaries/first/civpro/civpro_farrow_w07.doc
14 Stages of a Lawsuit -CHECKLIST
quote:
7. Disposition Without Trial – most cases don’t get to trial (only 1-3 percent get to trial)
4 different possibilities:
1. negotiated settlement – the most common resolution
mediated settlement – mediation is assisted negotiation with the assistance of a third party – a mediator helps facilitate communicate b/w the parties – there is now a rule requiring mandatory mediation – reduces costs and helps achieve a resolution
2. motion for judgment – if a party has made admissions through the oral examination for discovery process which entitle the opponent to succeed, you can move for judgment b/c the evidence sworn under oath on discovery entitles a win w/o a trial
3. moving for summary judgment – when one party in an action can demonstrate to the court that there is no triable issue in the case – the difference w/ motion for judgment is that there is no possible evidence to defeat your claim
4. striking a pleading by using R 25.11
5. default judgment – the D has failed to deliver a statement of defence – if you don’t respond to a statement of claim you are deemed to admit the allegations in the statement of claim – difficult motion to win b/c the court is being asked to do something quite significant, which is an ex parte (one party) – elements of the action still need to be proven, ie. serving affidavit, damages, etc.
WHEN IS R20 USED (anytime before trial)
o if after discovery, you look at their evidence and conclude that they cannot back up their pleading, so a Rule 20 motion allows for early adjudication to have matter resolved
o Efficient on party resources and time and judicial resources
o It avoids a trial or shortens the proceeding on satisfying a court that there is no need for a trial because there is no genuine issue of fact requiring one
Thanks, netlist.
That’s a fairly thorough explanation of a summary judgment.
If I were to simplify it, I might say that a motion for summary judgment is a ruling on the law involved in a case when there aren’t any facts left to dispute and argue in front of a jury, and only a legal interpretation of the law involved is necessary for a decision on a case.
Yes, basically it seems a jury trial is to establish “the truth” (or facts).
The jury usually weighs the facts, while the judge rules on questions of law – though in some cases the jury’s findings shape the judgment to varying degrees.
The suggestion being that once the facts are clear, the jury is no longer required, and one side may ask the judge to rule that “discovery has unveiled facts which are not in dispute any longer”. NLST cites JEDEC lawyer testimony and GOOG employee Robert Sprinkle (although some of their testimony is blacked out because it is confidential – attorneys’ eyes only).
Good points.
Many motions for summary judgment fail because there are still some facts in dispute that haven’t been uncovered in discovery to the satisfaction of the judge deciding the motion, but it’s often worth filing a motion for summary judgment to avoid the expense of a trial, and the time that it might take to have that trial.
Do you know if a summary judgment request usually leads to the judge giving the other party time to settle?
That is, if the judge sees things are “not looking good” they can say that in the conferences they have with the two sides’ lawyers, and urge them to settle.
In such a case, GOOG would be hard-pressed to settle fast – since in that climate they would be willing to pay MORE than what a judge might order. With a judge’s order, GOOG not only has to pay, but also suffers:
– security of GOOG servers in question (could harm GOOG share price)
– GOOG may have to use NLST products ANYWAY if they are the only ones available in this area
– would not help “do no evil” mantra (and GOOG’s entire business model of intrusive “big brother-like” behavior is tempered by the perception that they “are good”)
– would provide ammunition for competitors to point to GOOG’s prior history of subterfuge
So in fact settling is a significant advantage for GOOG – though I don’t know if that means NLST can therefore demand a premium in a settlement.
I am not a patent lawyer. But usually in patent law cases, the defendant claims that they have not infringed on the patents. This has not been done by Google. They seem to say that they do not know if they have infringed. This would be somewhat acceptable if they had done some lab experiments. But they have actually gone and hired vendors to construct the ASIC. Therefore, they have infringed – and since they had notice through various means, the infringement may be considered willful.
Hi Netlist,
Usually, an opposing party has time to file a response to a motion for summary judgment, and sometimes the possibility of asking for more time in some instances. Courts do tend to like the possibility that a case might settle before reaching a trial as long as it doesn’t appear that a party is attempting to delay only to make a case drag on.
Hello Netlist,
Is it probable that Google is having the “Mode C” memory modules manufactured and installed while litigation goes on?
quote:
Is it probable that Google is having the “Mode C” memory modules manufactured and installed while litigation goes on?
There has been no indication that GOOG has stopped what it was doing.
GOOG has indicated that the GOOG server they provided to NLST lawyers was representative of the infringing servers.
That does not indicate:
– if they are continuing to do so.
– if they are a minority or a significant portion (10% ?) of the current GOOG server set.
– if these servers are part of GOOG Caffeine project (GOOG’s effort at near real-time search which is probably even more reliant on in-memory techniques, thus requiring more memory than earlier servers). GOOG Caffeine has recently gone mainstream.
It seems NLST is claiming GOOG manufactured “hundreds of thousands of computer memory modules” – as earlier posted:
https://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-267873
quote:
——–
From docket 133-2 (which outlines earlier arguments – which eventually led the court to refuse GOOG arguments and grant NLST request for examining GOOG server):
—-
(pg. 2 )
This is a patent infringement case. Netlist owns IP relating to computer memory modules, and it shared some of those inventions under NDA with Google while Netlist’s patents were pending. Google turned down a business relationship with Netlist. Netlist alleges that Google then went on to manufacture hundreds of thousands of computer memory modules using the Netlist technology, and that it now uses those memory modules in server computers at Google data centers.1
——–
While NLST says GOOG may be using this memory in “hundreds of thousands of servers”, I wonder if GOOG actually uses so much memory.
Some articles have suggested GOOG used to buy bargain basement priced memory.
But maybe the move to Caffeine and scaling have pushed GOOG to use higher-memory computers.
After all, GOOG WAS interested enough to start its own memory module division.
But it still remains unclear if all GOOG servers run at max memory loading or just a few.
However, the effects of memory loading may be apparent at reasonably sized memory levels as well (i.e. one doesn’t have to load to 384GB – effects may be apparent as low as 24GB or so?), and if so, the availability of HyperCloud-like solutions impacts many more of GOOG’s servers.
Google Caffeine seems to include these things as well:
– a rewrite of the GFS (Google File System 2 or GFS2)
– doing more stuff “in-memory” (i.e. RAM), including databases held entirely in memory, etc. (what is not clear is what percentage of its servers would be involved with higher-memory applications – however, GOOG’s interest in making and using its own memory may suggest some movement in this direction)
NLST CEO Hong has mentioned all these uses for NLST HyperCloud:
– government labs with HPC (High Performance Computing) applications (the market Viglen caters to; Viglen recently qualified HyperCloud, though that may have followed from SuperMicro qualifying, since Viglen sells SuperMicro (SMCI) equipment).
– search applications needing to do things more in memory (RAM)
– database applications where the whole database is in memory (RAM)
– video delivery – supposedly huge user of memory
It is possible GFS2 itself shifts stuff to in-memory use:
http://www.channelregister.co.uk/2009/08/12/google_file_system_part_deux/
or
http://www.channelregister.co.uk/2009/08/12/google_file_system_part_deux/print.html
Google File System II: Dawn of the Multiplying Master Nodes
A sequel two years in the making
By Cade Metz in San Francisco
Posted in Enterprise, 12th August 2009 02:12 GMT
quote:
But GFS supports some applications better than others. Designed for batch-oriented applications such as web crawling and indexing, it’s all wrong for applications like Gmail or YouTube, meant to serve data to the world’s population in near real-time.
“High sustained bandwidth is more important than low latency,” read the original GFS research paper. “Most of our target applications place a premium on processing data in bulk at a high rate, while few have stringent response-time requirements for an individual read and write.” But this has changed over the past ten years – to say the least – and though Google has worked to build its public-facing apps so that they minimize the shortcomings of GFS, Quinlan and company are now building a new file system from scratch.
…
GFS dovetails well with MapReduce, Google’s distributed data-crunching platform. But it seems that Google has jumped through more than a few hoops to build BigTable, its (near) real-time distributed database. And nowadays, BigTable is taking more of the load.
“Our user base has definitely migrated from being a MapReduce-based world to more of an interactive world that relies on things such as BigTable. Gmail is an obvious example of that. Videos aren’t quite as bad where GFS is concerned because you get to stream data, meaning you can buffer. Still, trying to build an interactive database on top of a file system that was designed from the start to support more batch-oriented operations has certainly proved to be a pain point.”
From the comments section:
http://forums.channelregister.co.uk/forum/1/2009/08/12/google_file_system_part_deux/
As for “other tools”; Lustre was invented as a local network filesystem. GFS was invented to handle thousands of tasks all reading & writing as fast as they could all day every day. The indexing pipeline; download the internet, index it, run a few mapreduces over it to mark down spammy sites, crappy sites, duplicate sites, dead sites etc. and then compress it so it could be shipped all over the place. As Sean says in his interview, these days ‘routine use’ is dozens of petabytes of data that has to be randomly accessed – as in, the metadata has to stay in RAM.
“Still, trying to build an interactive database on top of a file system that was designed from the start to support more batch-oriented operations has certainly proved to be a pain point.”
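To put rough numbers on why “the metadata has to stay in RAM” becomes a scaling problem: the original GFS paper describes the master keeping less than 64 bytes of metadata per 64MB chunk. A back-of-the-envelope sketch (my own illustrative figures, not Google’s actual numbers):

```python
# Back-of-the-envelope sketch of why the GFS master is RAM-bound.
# The original GFS paper cites under 64 bytes of master metadata per
# 64MB chunk; the figures below are illustrative, not Google's actual ones.

def master_ram_gb(data_petabytes: float, chunk_mb: int = 64,
                  meta_bytes_per_chunk: int = 64) -> float:
    chunks = data_petabytes * 1024 ** 3 / chunk_mb      # PB -> MB -> chunk count
    return chunks * meta_bytes_per_chunk / (1024 ** 3)  # bytes -> GB

# "Dozens of petabytes" at the classic 64MB chunk size:
print(master_ram_gb(50))               # 50.0 GB of metadata in one master's RAM

# Shrink chunks to 1MB (closer to small-file workloads) and the
# metadata multiplies 64x -- hence "multiplying master nodes":
print(master_ram_gb(50, chunk_mb=1))   # 3200.0 GB
```

The point of the sketch: metadata scales inversely with chunk size, so moving to smaller files/chunks forces either much more RAM per master or many more masters.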
Hi Netlist,
Google has had a long history of attempting to do as much with software as possible while using as many inexpensive computers as they can. The original Google File System focused upon that approach, and the GFS 2 system, which was supposedly developed in 2007-2009, worked hard to distribute the processes involved over more computers, as well as making more processes happen in memory rather than touching disks as much. So more memory on each machine might help.
Google is supposedly working on a GFS 3, but I haven’t heard too much in the way of details. I would suspect that it would still focus upon spreading out processes on as many inexpensive computers as possible, though.
The relevance for NLST investors, however, is to estimate the number of servers that may be using high memory (sufficient that memory loading causes slowdown – in which case the NLST HyperCloud, or the JEDEC “Mode C” proposed-standard memory modules that GOOG was manufacturing, become useful).
If these are common needs across most of their servers (not just the main or central nodes), then there may be “hundreds of thousands of servers” which are infringing NLST IP.
If they are only a few node servers, that number would be less.
However the fact that GOOG went to great lengths to manufacture the JEDEC “Mode C” proposed standard modules suggests they needed these crucially (and could not rely on JEDEC or the memory module makers to produce these on time). It also suggests that the need was not for a few servers alone.
I don’t know enough about GFS2 (or GFS3 as you suggest) – but the suggestion is that more is being done in-memory (either indexing or database type stuff is in-memory). The question (of interest to NLST investors) is however still how many servers have these high-memory requirements.
Thanks for your comments.
The problem complicating the above analysis: as GOOG moves to GFS2, does distributing the main nodes’ work out to many servers still entail using much more memory per server?
Or is the move part of a strategy that knows it CAN increase memory per server (earlier node servers may already have been running at high memory) – which is what allows some of the faster (in-memory type) methods to work?
Or are the servers still able to get by with modest amounts of memory – since, with the new GFS2 structure, the tasks they perform do not require that much (per server)?
Hi Netlist,
Part of the change to GFS 2 involves smaller file sizes as well, which should help reduce the amount of memory required per server.
See: Google File System II: Dawn of the Multiplying Master Nodes
A snippet:
Docket 166-4 (GOOG vs. NLST) has a few pages from the transcript of the claims construction conference held Nov 12, 2009, after which the court overturned GOOG’s reading of the claims construction. The court also disallowed GOOG’s request to include the “patent prosecution history” for NLST patents at the USPTO.
Here is a more complete selection from the transcript.
Listen as GOOG tries to argue (unsuccessfully) for their version of the language to describe the situation.
As many who have read the court docs can see as well – the judge also sees that GOOG is trying to use overexpansive language to change what the problem is, from something that infringes NLST IP to something more general or separate (i.e. something that doesn’t look like NLST IP).
This comment by judge is relevant:
quote:
asked you, ’cause what i’m — what I am perceiving is — is an overbreadth in terms of incorporating in the construction of the phrase itself the reason or the motivation for wanting to have the phrase included in the first place.
GOOG’s claims construction arguments (and its demand that the patent prosecution history for NLST patents be included) were discarded by the court (Nov 12, 2009 hearing on claims construction etc.).
Note: Pollack is attorney for GOOG. Pruetz for NLST.
From docket 166-4 (GOOG vs. NLST):
—-
(pg. 3 – pg. “165” of transcript)
.. the computer system that the memory module has a certain thing; in this case, a number of ranks.
Then the computer system generates and transmits a set of output control signals corresponding to the two ranks. It’s told by the SPD that it has one rank. It generates control signals that correspond to the one rank, and then the logic device generates control signals that correspond to the two ranks that are actually there.
The Court: Okay. Where are you looking at “generating” ?
Mr. Pollack: Okay. So for example, if the —
The Court: Where is the generation language that you’ve just been reading ?
Mr. Pollack: The logic element receives —
The Court: Right.
Mr. Pollack: — the set of input control signals corresponding to the single rank from the computer system’s memory controller.
The Court: I know. But then you’re using the word “generating” —
Mr. Pollack: Oh, and then I was reading what the module does with them. The logic device takes those and generates the ones that match up with the actual number of ranks, which is two.
In each case, where the “corresponding to” language
—-
—-
(pg. 3 – pg. “166” of transcript)
comes in, it’s preceded by the discussion of the SPD device characterizing to the computer system the different number of devices.
So when — when the claim says that the signals received from the computer system correspond to this second number of devices, it’s because the computer system’s been instructed that those — that that’s what’s there. And that’s why you need a logic device. The whole point of the logic device would go away, as you pointed out earlier, if the computer system knew exactly how many devices were on there.
The Court: But are we — in construing the phrase, are we — are we making judgements as to why as opposed to what ? I mean, you’re saying it corresponds to a — it’s because of this that it corresponds to.
In terms of the construction of the phrase itself, why would we build into the construction of what the phrase means “corresponding to” the reason why the — it has determined that it’s appropriate to “correspond to” ?
Mr. Pollack: Well, what it means for the — the signals, right – the signals are coming from the computer system, right ? The corresponding language is characterizing those signals.
The Court: But —
Mr. Pollack: Right ?
The Court: You don’t understand the question that I
—-
—-
(pg. 3 – pg. “167” of transcript)
asked you, ’cause what i’m — what I am perceiving is — is an overbreadth in terms of incorporating in the construction of the phrase itself the reason or the motivation for wanting to have the phrase included in the first place.
And — and my concern is that why is that appropriate to — to — to incorporate in the definition of what the phrase means the reason why the phrase is there and the reason why there’s a — been a decision to — to make it in that way.
Mr. Pollack: Because it — I don’t see it as incorporating the reason. It’s the why —
The Court: You said “it’s because.”
Mr. Pollack: The — the characterization — the claim characterizes the signals as corresponding to something.
The Court: Yeah, but what do we care whether the computer understands it or not, as long as it corresponds ?
Mr. Pollack: Well, the only way it can correspond —
The Court: Well I don’t know whether that’s true or not.
Mr. Pollack: The —
The Court: I’m trying to understand —
(simultaneous colloquy)
The Court: — the fact of the correspondence. Now you’re saying that’s the only way that the fact of the correspondence can occur. I don’t know whether it can or not. But we can certainly accommodate the fact of the corresponding.
—-
(pg. 3 – pg. “168” of transcript)
We can define what “to correspond” means.
Now you might be right that it will only be actualized if the computer understands whatever, or maybe not.
But — but we will always be able to determine whether or not the correspondence is transpiring, right ?
Mr. Pollack: If the signals that are coming from the computer system —
The Court: You don’t know how to answer the questions “yes” or “no.”
Mr. Pollack: We can —
The Court: Well, then why don’t you just answer the question ? I mean, because it doesn’t do much for your credibility when you just side-step questions that obviously you can answer and you choose not to. I ask the questions, and you give the answers. And then after that, if we have time to discuss what you want to discuss, we’ll discuss.
I’m asking questions. I’m trying to advance this discussion and narrow down to what really is the essence of the dispute here.
Mr. Pollack: I apologize, your honor. I’m not sure I understood your question. That’s why I was trying to rephrase it to — to understand what you meant.
The Court: Okay, then tell me that next time.
Mr. Pollack: If what you mean is that you can tell — that the signals themselves correspond to a thing based ..
—-
(pg. 4 – pg. “185” of transcript)
Mr. Pollack: The Court —
The Court: — What you are asking me — What you’ve just proposed, you know, “matching up” — the language you proposed for the two words that you agree are the only ones that you all dispute means exactly the same thing as the two words that are here. I mean, I understand that you all know the context and you have a lot of other — a lot of other issues that you’re concerned about.
But from my part, listening to you all and listening to the dispute, as you characterize it, and listening to your proposals as you’re proposing them to me, the language that you are suggesting to substitute — instead of the two words that you dispute — means exactly the same thing that these two words mean. And so I can only conclude that there’s no substantive dispute.
Mr. Pollack: Actually, your honor, I’m sorry. I wasn’t just suggesting that you just replace “corresponding to” with “match up” —
The Court: Well, you said “conform”. “Conform” means the same thing as “corresponding to.”
Mr. Pollack: I was actually attempting to modify Netlist’s proposed construction in that —
The Court: What ?
Mr. Pollack: — That’s — What I was — when we were talking about before —
The Court: Okay, so what language did you suggest to — to substitute the words — because we’re — we’re down to “corresponding to”.
Mr. Pollack: Right. And I suggested that where they — if we said that the control —
The Court: No, what —
(simultaneous colloquy)
The Court: Excuse me. What — what words are you suggesting that I substitute for “corresponding to” ?
Mr. Pollack: “Are configured to use.”
The Court: So you’re saying, “the set of input control signals are configured to use.”
Mr. Pollack: A second number of devices.
The Court: I’m — no that doesn’t — no. Okay. I’m not going to construe this term. I don’t think it requires construction based on my understanding of what you all are disputing.
And — and to suggest that control signals, given your — given your — the construction of what a signal is, is configuring control signals is just nonsensical to me.
Mr. Pollack: It — it would mean that you choose — the set of signals are chosen, are configured, are designed, to match up to — to the second number of devices.
Ms. Pruetz: Your honor, I just don’t think there’s
—-
—-
(pg. 4 – pg. “187” of transcript)
any way that you can define “signal” to be configured as he’s —
(off-the-record discussion)
Mr. Pollack: And it’s a set of signals.
The Court: What’s your definition ?
(off-the-record discussion)
Mr. Pollack: I’ve got it. We’ve construed it as “varying electrical impulse that conveys information from one point to another.”
The Court: Excuse me ?
Mr. Pollack: “A signal is a varying electrical impulse that conveys information from one point to another.”
The Court: Right. And so you’re suggesting configuring —
Mr. Pollack: A set of signals may be configured.
Ms. Pruetz: Oh, I mean, that — that is really a discussion for some experts, but I can’t quite imagine using “configured” that way.
The Court: And I don’t — it doesn’t — well, it may be — it may be something that is possible, but it’s not readily apparent to me. And given the way this — this proposed phrase reads, it’s certainly a lot clearer in the phrase as it’s presented now than it would be if I included that language in that fashion.
(off-the-record discussion)
The Court: Okay. So the last one is the — well, I
—-
—-
(pg. 4 – pg. “188” of transcript)
guess this — probably has some bearing on the last one. “The first command signal corresponding to the second number of ranks.” And the only difference here is three words by Google, “generated by” — or no, “generated by two” command — and then Netlist says “received from”, which is configured to utilize.
Okay. Any — any comments ?
Ms. Pruetz: This is really the same situation we had before. I mean, it’s the first command signal that’s corresponding to the second number of memory ranks, which is also the smaller number of memory ranks. It’s just using “ranks” instead of “devices”.
The Court: Yeah.
Ms. Pruetz: So I think our argument would be the same, that it’s clear the way it is.
The Court: Counsel ?
Mr. Pollack: Again, your honor, with the — with the understanding that the command signals, the way they’re operated on is different. And what we’ve — that’s why we used “command” again to emphasize that what we’re doing here is the computer system’s commanding with these command signals a memory module having a second number of ranks.
That’s the — the command signals also are intended and generated to work with the computer — the second number of ranks, which is different from the number of ranks that’s actually there.
—-
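For readers trying to follow the “corresponding to” argument above, the rank-multiplication idea at issue can be sketched roughly as follows. This is only an illustrative sketch of the concept as described in the transcript (the SPD reports fewer ranks than are physically present, and a logic element expands the controller’s signals to the real ranks) – the function and variable names are mine, not from the patents or the court record.

```python
# Rough sketch of rank multiplication as described in the transcript:
# the SPD advertises one rank to the computer system, the module
# physically carries two, and a logic element expands each chip-select
# from the memory controller into a select for one of the real ranks,
# steered by an extra address bit.  Names here are illustrative only.

def spd_reported_ranks(physical_ranks: int, multiplier: int = 2) -> int:
    """What the SPD tells the memory controller (fewer than reality)."""
    return physical_ranks // multiplier

def logic_element(virtual_chip_select: int, extra_addr_bit: int) -> int:
    """Map the controller's (virtual rank, extra address bit) to a physical rank."""
    return virtual_chip_select * 2 + extra_addr_bit

physical = 2
print(spd_reported_ranks(physical))  # controller is told: 1 rank
print(logic_element(0, 0))           # controller's rank 0, bit 0 -> physical rank 0
print(logic_element(0, 1))           # controller's rank 0, bit 1 -> physical rank 1
```

This is why the judge’s question matters: the controller’s signals “correspond to” the advertised (smaller) number of ranks, and only the on-module logic makes them address the actual ranks.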
Thanks for sharing these parts of the transcript, netlist.
Lawsuits involving highly technical issues have to be hard on judges, who tend to be pretty well versed in legal issues but may not have the technical background to understand the intricacies of things like memory modules. I know that in some cases involving technical issues like this, a court will appoint someone who has both a legal and a technical background as a special master to explore those topics and present them to the court in a way that a judge and/or a jury might understand. It does sound like there’s some confusion here on the part of the judge, but it also sounds like he’s trying to be very careful to understand what’s being described.
netlist,
Can Goog drop the Goog/Netlist case at this point whether or not the judge grants the summary judgment?
spencity
Are you asking somebody you picked a fight with to stop beating on you? HAHAHA!
I wouldn’t until you were dead.
Sorry Spencity,
That was a little harsh.
I don’t think you can walk away from a case you started, and hope it’s going to be okay.
You will lose!
quote:
Can Goog drop the Goog/Netlist case at this point whether or not the judge grants the summary judgment?
I don’t know.
My guess is if you file a suit, you should be able to drop it.
Unless there is some complication with NLST being counter-claimant and GOOG as counter-defendant as well in GOOG vs. NLST.
One would think if GOOG was dropping the case, in most cases there would be some concession to be extracted from the other party – and then that would be a “settlement”.
That’s OK fallguy. My train of thought is that Google filed this case as the plaintiff in order to control the litigation process, and not because they have a case. It may be that Google gains an indirect advantage from delaying the outcome of any litigation, by keeping some of Netlist’s IP in question in the eyes of the industry for as long as possible. Google could delay a clear outcome significantly if Netlist has to wait for the Netlist/Goog case to run its course. I would think that Netlist’s pricing power and industry adoption of HyperCloud will be affected by the outcome of any settlement, and a delay of a settlement may cause some customers to hesitate accordingly. That would give Google more time to take advantage of their current lead in technology, which could be worth more than the cost of a settlement. Just playing the “what if” game.
There is still a possibility of a settlement, but at some point a settlement needs to be approved by the judge in the case, even if the judge’s input is just rubberstamping a settlement agreement.
I think you make a good point spencity, about Google having to carefully weigh the cost of continuing to pursue litigation, against what the cost of a settlement might be.
I remember roughly what the present Delaware Chancellor said to both parties in a big civil lawsuit in Delaware’s Chancery Court from a year or so ago, after their presentations were finished. He told them that it would probably be at least 4 or 5 weeks before he came out with a ruling, and he urged them to continue considering a settlement in the case. He said something like – “usually most people don’t like my decisions – on both sides of a case.”
You can’t just drop a case you have started if the other party has counter claimed because their counter claim would still stand. In these cases though, a settlement is normally best for all concerned.
Hi Simon,
Good point. Just to amend that a little, you can drop your claim in a case even if the other party has filed a counter claim, but as you note, the case would continue to resolve the counterclaim.
Another Google/metaram patent application published today:
Apparatus and Method for Power Management of Memory Circuits by a System or a Component Thereof
anybody still active on this board?
It appears everything is on hold until the USPTO review is completed. This is like watching grass grow. Oh wait! my grass actually grows a heck of a lot faster than this.
Hi Fallguy,
I’m still pretty active on this blog with new posts. Didn’t expect this post to grow to almost 280 comments.
I’m still puzzled about why Google purchased MetaRAM in the first place, concerned about what might happen in the lawsuit, and curious what its implications may be.
I’m not an investor in any of the companies involved, but I’m still very much interested in the outcome.
The USPTO isn’t involved at this point, as far as I know.
Hi Bill,
Inphi has successfully filed a challenge with the USPTO regarding the ‘386 patent. GOOG filed for and was granted a stay in the GOOG vs NLST case pending determination by the USPTO. I believe a stay was also requested and issued in the NLST vs GOOG case. I am pretty sure I had read the docs concerning this but don’t seem to be able to access them any more. For the life of me I don’t know why the court would grant a stay in one case and not the other. They may be separate cases, but they are fighting over the same IP.
Now SMOD has challenged the 386 patent:
http://biz.yahoo.com/iw/101025/0677140.html?.v=1
Does anyone have any thoughts of this?
Cheers.
Hi Fallguy,
It’s interesting to see these challenges to Netlist coming out. I’m not quite sure what to make of both the Inphi and the SMOD actions, but I think I’ll be spending some time learning more if I can. How do those impact what Netlist is doing now? I’m not sure.
Thanks, netlist.
Some interesting details there, especially (to me) the section about a patent reexamination of the ‘912 patent requested from SMOD and Google, with a possible decision as to whether that reexamination will be granted or denied expected in January.
Q4 2010 earnings call transcript (not exact)
http://www.netlist.com/investors/investors.html
Netlist Fourth Quarter, Year-End Results Conference Call
Wednesday, March 2nd at 5:00 pm ET
http://viavid.net/dce.aspx?sid=00008211
Moderator – Matt Lawson (?) of Allen & Caron (NLST’s Investor Relations firm)
Chuck Hong – NLST CEO
Gail Sasaki – NLST CFO
Matt Lawson – Allen & Caron (Moderator):
…
Good afternoon Ladies and Gentlemen. Thank you all for joining us.
…
And with that I’d like to turn the call over to Chuck.
Good afternoon, Chuck.
at the 2:10 minute mark
Chuck Hong:
Good afternoon Matt. Thank you all for joining us to discuss the 2010 year end results and outlook for 2011.
As you saw from our release earlier today, we had another strong quarter with 51% growth in revenue over last year’s Q4.
And year over year, we more than doubled our revenues.
We also saw increases in gross profit – 236% growth year over year.
And 95% growth quarter over quarter.
And a sequential quarterly increase of 9% in GP (gross profit).
Much of the growth in the overall business we experienced last year came from our NetVault family of products and our baseline business – which is a combination of flash and other specialized memory modules for data centers and industrial applications.
We expect the volumes in these businesses to accelerate through this year as our products in this area continue to be well received by the customer base.
In addition to supporting this baseline growth operationally, we spent a great deal of time and resources last year working to bring HyperCloud to market.
We started with engineering prototypes at the beginning of 2010 and through the course of the year, in response to customer and partner feedback and requests, we implemented multiple revisions and refinements.
In this process, we worked closely with most of the major server OEMs, major storage OEMs, end-customers, DRAM and CPU suppliers, and motherboard manufacturers.
Each of these partners provided important feedback from their perspective to make HyperCloud not only a better performing product, but one that could achieve broad compatibility with the wide breadth of technical requirements requested by each of our partners.
They represent a broad spectrum of the entire industry infrastructure.
Also in this process, there have been many cycles of product evaluation, technical feedback, product refinement.
And numerous testing cycles in a variety of server platforms, and a concerted effort by NLST and our partners to make HyperCloud a more robust and highly reliable product that can withstand the stresses of the harsh data center environment.
All of this resulted in a longer than expected gestation cycle from prototype to mass production.
Through this process, our partners have remained very enthusiastic about the technology and the benefits they would eventually derive.
The partners have also remained patient, recognizing that the HyperCloud chipset is inherently a complex product.
But they also recognized early on that the HyperCloud IP is both a short-term solution, as well as a fundamental long-term solution, to the growing problem of memory bottleneck in the data center space.
at the 5:15 minute mark
So they continue to provide detailed feedback on their individual requirements and what they would like to see in both the current and the next generations of HyperCloud.
The important point is that HyperCloud in its various configurations continues to successfully undergo testing today and is at various stages of evaluation at the OEMs and at end-customers.
And we expect to see some of this testing completed in the coming months.
As a precursor to those events, most recently, 8GB and 16GB HyperCloud products passed an extensive battery of certification tests and achieved independent industry certification from CMTL – Computer Memory Test Labs.
This achievement was a further validation of our interoperability on the current generation of Intel server motherboards.
On the end-user front, Red Bull Racing announced a 60% greater server utilization when running Formula 1 racing car simulations and computational fluid dynamics (CFD).
This press release underlined the performance benefits which are currently available with HyperCloud memory.
HyperCloud memory is also listed on the VMWare website as one of only two memory partners of VMWare.
As many of you know, as VMWare and other companies innovate and provide ways to increase server utilization at end-users, the need increases for memory performance within each virtual machine.
And our technology is a key enabler of that pathway.
HyperCloud is currently included in multiple system configurations for proof of concept testing at VMWare.
We mentioned on the last call that we had engaged major players, both in the OEM space and in the DRAM space, in order to extend the reach of HyperCloud.
Our goal here eventually is a formalized industry alliance of major server OEMs, channel partners and CPU and DRAM manufacturers, which results in a broad mainstream supply and use of HyperCloud.
This would build out the current network of HyperCloud partnership consisting of companies such as VMWare, SuperMicro and MSC Software.
It is difficult to determine an exact timeline of a broad industry alliance.
We are working – day by day, one company at a time – to accelerate the adoption of HyperCloud as the de facto standard for high-capacity, high-performance memory.
HyperCloud is a technology, we believe, that has been designed for where the server is headed in the coming decade.
We are encouraged in the progress in all areas of our business this year and anticipate continued growth for each product family.
We see the revenue mix changing in 2011 in favour of our flagship products – HyperCloud and NetVault NV.
This reinforces the value of our intellectual property (IP) portfolio and strengthens our competitive position.
But we also expect our flash and baseline business to grow on a steady ramp through the course of this year.
at the 8:40 minute mark
Since our last call, we announced the qualification of NetVault NV by Compellent Technologies for production shipments in the Compellent Enterprise Network Storage Solution.
Due to the broad-based market interest in the NetVault-NV technology, we have been in development and plan to introduce a new product platform utilizing our proprietary “Vault Controller” in the coming weeks.
We foresee a significant revenue increase in the flash-backed battery-free products in 2011.
On the R&D front, we continue to invest resources to complete the development of the generation 2 HyperCloud chipset.
This is designed to work with the next generation of server chipsets from Intel and AMD.
This is an important undertaking for us as we extend the benefits of HyperCloud technologies into higher speed, multi-core servers, running in excess of 2GHz clock speeds.
The next generation of HyperCloud will also consume less power.
We have recently started customer sampling of prototype parts of this generation 2 HyperCloud, well ahead of the OEM qualification cycle.
On the intellectual property (IP) front, we continue to make progress as we were recently awarded two patents protecting the company’s innovations that utilize rank-multiplication and load-reduction technologies.
One of these patents further extends the company’s intellectual property claims related to rank multiplication.
This technology, used in HyperCloud memory modules, enables the system to address more memory capacity in a standard 2-processor server.
In addition, rank-multiplication technology provides HyperCloud the advantage of using the mainstream 2Gbit DRAM vs. the higher cost-per-bit 4Gbit DRAM, which was recently introduced for making the high-capacity 16GB 2-rank registered DIMMs for server memory.
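(Illustration: the cost argument above can be sketched with rough chip-count arithmetic. A 16GB module holds 128 Gbit of data, so it takes 32 × 4Gbit DRAMs or 64 × 2Gbit DRAMs; rank multiplication lets the 4-physical-rank 2Gbit build present itself as a standard 2-rank DIMM. The chip counts below ignore ECC devices and are an assumption by the transcriber, not figures from the call.)

```python
# Illustrative arithmetic for the 2Gbit-vs-4Gbit trade-off described above.
# Chip counts ignore ECC devices; the 16GB module size and DRAM densities
# are from the call, the rest is assumption.

MODULE_GBIT = 16 * 8  # 16 GB module = 128 Gbit of data

chips_4gbit = MODULE_GBIT // 4   # 32 chips -> fits in 2 physical ranks
chips_2gbit = MODULE_GBIT // 2   # 64 chips -> 4 physical ranks

# Rank multiplication presents 2 physical ranks as 1 logical rank,
# so the 4-rank 2Gbit build still appears as a standard 2-rank DIMM.
logical_ranks = 4 // 2

print(chips_4gbit, chips_2gbit, logical_ranks)  # 32 64 2
```

The economic point is that the 64 cheaper mainstream 2Gbit parts can substitute for 32 of the newer, higher cost-per-bit 4Gbit parts without the host seeing a non-standard rank count.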
Also on the IP front, many of you will have noted the recent actions by other players in the memory space to challenge our patent position related to a number of platforms, including HyperCloud.
While these processes will need to run their course, we are comfortable in our position and confident in the enforceability of our patents.
It is also interesting to see that more companies are attempting to use our IP, or challenge our ownership.
While we do not believe these efforts will succeed, we believe they are the result of a belated recognition that HyperCloud is the most optimal technology available to address the growing memory constraints in the data center server.
at the 11:20 minute mark
So in summary, we continue to position HyperCloud and NetVault as technology standards for the industry, while working to get these products to market, in order to monetize the IP which resides in them.
At the same time we continue to invest in the next generation of both product platforms.
Gail will now provide you a more detailed financial update on the Q4 and year-end results.
Gail ?
Gail Sasaki:
Thanks Chuck and good afternoon everyone.
As you saw in our release this afternoon, revenues for the Q4 ended Jan 1, 2011 were $10.1M, up 51% when compared to $6.7M for the Q4 ended Jan 2, 2010.
In the Q4 we continued to see overall growth in the sales of our memory modules, and the sales of our memory modules into application-specific servers for RAID and data center optimized applications.
Revenue for our NetVault family of products increased from prior year’s quarter by 65% and year-over-year by 168%.
The NetVault mix during the Q4 was a bit different than expected as it was weighted towards our battery-backed product.
That mix will reverse during the early part of 2011 toward the higher ASP, more robust feature set and battery-free version of NetVault (NetVault-NV) as our OEM partners’ marketing efforts take hold and they see improved order traction from their customers seeking the operating, ecological and economic advantages of that product.
HyperCloud sales, although still not in production volume, were associated with orders for proof-of-concept at end-user customer targets.
Sales of the more commodity-like RDIMM and industrial SO-DIMM products did come under some pressure in Q4 due to the greater supply of DRAM and subsequent decrease in pricing of the last couple of quarters.
at the 13:30 minute mark
Gross profit for the Q4 ended Jan 1, 2011 was $3.3M, or 32.6% of revenues, compared to a gross profit of $1.7M, or 25.1% of revenues, for the Q4 ended Jan 2, 2010.
The year-over-year gross profit dollars and margins improved due to the 105% increase in revenue as well as the increased absorption of manufacturing costs as we produced 88% more units than the year earlier quarter with no related increase in the cost of factory labor and overhead.
We are planning on a range between 25% and 30% for our gross profit percentage during 2011, which will depend on product mix, DRAM cost and continued absorption of manufacturing cost in each quarter.
Net loss for the Q4 ended Jan 1, 2011 was $3.2M, or a $0.13 loss per share, compared to a net loss in the prior period of $3.0M, or a $0.15 loss per share.
These results include stock-based compensation in the Q4 of $261,000, compared with $257,000 in the prior year period.
at the 14:40 minute mark
And depreciation and amortization expense of $596,000 in the most recent quarter, compared with $557,000 in the year ago period.
Revenues for the year ended Jan 1, 2011 were $37.9M, up 105% from revenues of $18.5M for the year prior.
Gross profit for the year ended Jan 1, 2011 was $9.9M or 26.3% of revenue, compared to gross profit of $3.0M or 16% of revenues for the prior year.
at the 15:20 minute mark
Average product selling prices (ASPs) have increased by 136%, from $22 to $52, year over year.
This increase is mainly due to the product mix trends.
With a planned mix change towards more NetVault-NV and HyperCloud, we anticipate further increases in the average ASPs throughout 2011.
Quarter over consecutive quarter we saw a 19% decrease in the average ASPs, partially due to the declines in DRAM pricing, but also due to the change in the product mix, as we discussed earlier towards our lower ASP based battery-backed product (NetVault-BB).
Net loss for the year ended Jan 1, 2011 was $15.1M or a $0.64 loss per share, compared to a net loss in the prior year of $12.9M or a $0.65 loss per share.
The increased loss was due to increased engineering, sales and marketing costs associated with new technology development and sampling and qualification efforts at various OEMs and end-users.
These results include stock-based compensation expense for the year ended Jan 1, 2011, of $1.5M compared with $1.5M in the prior year.
Total operating expense declined to $6.5M from $7.9M in the previous quarter, as we had estimated during the last quarter’s call.
at the 16:45 minute mark
We expect that operating expenses will be flattish at the Q4 level during the first half of the year, and then ramp slightly in the second half.
Year-over-year total operating expenses increased from $16.4M to $25.8M, primarily due to increases in non-recurring engineering charges, headcount, material expenses related to product builds, primarily for HyperCloud development and legal fees, as we increase patent filing and protection activities in the high performance computing market.
The sales and marketing spend has also grown during the year as we expanded sampling and qualifications activities by a large percentage.
And invested in a new head count necessary to execute our vertical marketing strategy of engaging with end-user customers directly, including programs that are moving forward with industry leaders in financial services and virtualization.
at the 17:40 minute mark
Sales and marketing expense is also expected to somewhat flatten in the coming quarters as we have reached the level necessary to support our target vertical programs and to work directly with the growing base of existing and potential customers to secure sockets.
at the 17:55 minute mark
These increases in R&D and sales and marketing throughout 2010 have been partially offset by a small decrease in SG&A expense between years, and also between consecutive quarters.
Earlier we also mentioned a significant increase in manufacturing productivity with no related increase in cost.
We anticipate continued productivity increases as we go forward in 2011.
at the 18:20 minute mark
At Jan 1, 2011 we have provided a full valuation allowance against net deferred tax assets.
The effective tax benefit rate of 5% for the year ended Jan 1, 2011 represents the benefit of a one-time operating loss carryback, resulting from the announcement of an economic recovery-based tax legislation.
On a go-forward basis we anticipate a rate near 0% until we begin to utilize our fully reserved net-deferred tax asset.
We ended our Q4 with cash, cash equivalents and investments and marketable securities totalling $16M, compared to $19M at Oct 2, 2010.
In addition we had unutilized availability of $2.2M on our credit line at the end of the quarter.
at the 19:20 minute mark
We were a net user of cash during the Q4, as cash was invested in operations – to support R&D, sales and marketing and also to support growth as our accounts receivable grew as a result of increased revenue.
And our inventory of longer lead-time components increased to support fulfillment of Q1 purchase orders for our NetVault family and base business and qualification activities for HyperCloud.
at the 19:30 minute mark
During the Q4 capital expenditures totalled $224,000 compared to $84,000 in previous year’s quarter.
We anticipate investment in equipment to support our new products over the next several months of approximately $500,000.
at the 19:50 minute mark
We continue to be mindful of our cash use and will continue to find ways to control our burn rate, even as we continue our expressive (?) cross-product (?) and marketing initiative.
We expect to use a mix of cash and some credit from our line to finance these investments until we reach financial breakeven, which is expected later this year.
at the 20:10 minute mark
We also anticipate sufficient capacity on our current $15M line of credit for working capital needs.
Question & Answer session:
at the 20:45 minute mark
Rich Kugele – Needham & Co:
Thank you. Good afternoon.
Uh .. just a few questions from me. I guess first .. um .. on HyperCloud. Last quarter you had talked about a component issue that had forced a kind of a .. restart in the qual process.
Today you are talking about being back in qual, so I assume the component issue has been resolved and .. any comments there ?
Chuck Hong:
Uh .. Rich .. you are referring to .. uh .. DRAM specific component issue.
Yeah that was resolved .. uh .. as of probably 2 months ago.
Rich Kugele – Needham & Co:
Okay .. um .. and from a .. from a breakeven standpoint, what should we assume the revenue would need to be, and is it possible to reach that revenue at some point in 2011 .. um .. if HyperCloud isn’t a material part of the mix ?
Gail Sasaki:
Hi Rich. We expect about a $20M revenue per quarter for breakeven. And it is possible to reach that with our baseline business plus our NetVault family.
Rich Kugele – Needham & Co:
Uh .. are you willing to give us a sense in Q1 on what the mix might be between the base business and the NetVault line ?
Just .. a relative mix between the two categories.
Gail Sasaki:
Um .. I would if Q1 was over, but I think it is still a little early.
Rich Kugele – Needham & Co:
Okay. Alright, well I’ll get back in the queue. Thank you.
Gail Sasaki:
Thanks Rich.
at the 22:45 minute mark
Arnab Chanda – Roth Capital:
Yeah hi. Couple questions. First for Chuck maybe you could tell us a little bit about .. does it seem like .. maybe I misunderstood please let me know .. that you know really it’s more of a NetVault 2 (means “HyperCloud 2” – corrects below) that is going to see any kind of adoption .. because maybe the OEMs first want to evaluate and take a look at .. you know technology that’s so different .. than you know what they’ve used in the past ?
Or is there a possibility that you’re going to get .. I’m sorry .. I’m talking about HyperCloud .. “HyperCloud 2”.
Is that is that more likely .. could you see some adoption on “HyperCloud 1” ?
I’ll followup .. thank you.
Chuck Hong:
Yeah, Arnab. The current product that’s in testing and qualification is obviously “HyperCloud 1”, and it’s gone through .. uh .. many months and quarters of testing.
And once that’s completed that will start to ship into the current server base – mostly Westmere, the Intel Westmere as well as the AMD Magny-Cours based servers.
Uh .. the gen-2 product is targeted for the next-generation and that will be the Romley, which is expected to launch at the end .. very end of this year.
at the 24:35 minute mark
There will be Westmere .. uh .. will continue to ship well into 2012, so .. uh .. we expect to see the HyperCloud 1 product ramp .. uh .. this year, after qualification, and be sold you know well into 2012.
While we will get the HyperCloud 2 product out – that is faster and that is lower power and we will .. you know .. start get those products evaluated .. early.
And to get them qualified and get ready to ship you know when Romley launches at the end of the year.
So probably see them .. uh .. ship concurrently.
Arnab Chanda – Roth Capital:
Ok, great. If I can ask another qualitative question about the adoption of HyperCloud. Seems like it is roughly taking at least a year longer than maybe what what you thought about or hoped for initially.
What .. are there kind of .. could you talk about what the factors are .. is it because the product has certain issues, is there market adoption question .. can you talk a little bit about what you think has caused it to take longer than you .. than originally had thought.
at the 25:55 minute mark
Chuck Hong:
I don’t think it took a year longer than we anticipated .. uh .. you know this is a .. as you know I mentioned this is a highly complex chipset, and you’ve got a fairly complex ecosystem of CPU, BIOS, you have server .. uh .. OEM server manufacturers, you have DRAM manufacturers.
All of this has to come together seamlessly, and you’ve got you know any one of these OEMs, several dozen server platforms.
So it was .. quite a bit of WORK.
Uh .. in terms of making the product plug and play and compatible.
So .. uh .. probably took you know longer than we expected, but certainly was not a year longer than we had anticipated.
This product as we mentioned was in prototype form at the beginning of last year (2010).
It’s been about 12 months since that time.
And .. uh .. things are progressing very nicely at this point.
at the 27:10 minute mark
Arnab Chanda – Roth Capital:
So .. uh .. Chuck .. I’m going to ask a question on that .. uh .. first two questions if I could.
One is – do you think you could have invested more in R&D – is that kind of your more cautious on investment and that’s part of what it took longer ?
Or .. and then secondly do you expect any revenues from HyperCloud at the end of this year, or is it more likely to be in 2012 ?
at 27:30 minute mark
Chuck Hong:
Um .. we invested .. you know there are different parts of the R&D that go into a product like this.
First is on the chipset itself.
Uh .. the architecting, the design, the specing out (spec = specification) of the chipset.
Implementing the silicon .. um.
And then you have to bring it up at the application level .. uh .. you know in retrospect we probably could have .. uh .. spent more resources on the latter part of the R&D.
Those are kind of the last-mile issues that took us a long time to .. uh .. you know resolve and get our arms around.
The other thing was that customers at the different server manufacturers – they’re constantly tweaking as well.
DRAM manufacturers are tweaking their DRAMs and server manufacturers are tweaking their server boards – the thermals, the electricals and so forth.
So .. uh .. you know we .. it is a gauntlet that took us longer than we had anticipated to get through.
Lot of good feedback in that process from the customers and the enablers – the technology enablers that are out there.
And you’ve got a much more robust solid product than we had started out with a year ago.
Uh .. in terms of revenue traction. Definitely .. uh .. you will see .. it will not be 2012.
You will see traction .. uh .. you know, fairly quickly.
Arnab Chanda – Roth Capital:
Thanks Chuck.
Gail Sasaki:
Thanks Arnab.
at the 29:35 minute mark
Orrin Hirshman – AIGH Investment Partners:
Hi how are you.
Um .. can you just mention a little bit in terms of utilizing the credit line etc.
And the implication of your comments was that you can survive without raising additional equity .. until you can get to be profitable, cash flow positive.
Can you comment a little bit more on that – number one. And then I will followup on HyperCloud.
at the 30:00 minute mark
Gail Sasaki:
We believe that we have (unintelligible) cash and working capital availability on our line for the next 12 months.
And we .. you know if .. if and when (?) we will consider you know sources of capital raising .. um .. to buffer our balance sheet.
But .. we do not have any plans currently.
You had a question about HyperCloud ?
at the 30:30 minute mark
Orrin Hirshman – AIGH Investment Partners:
Yes. You answered the question in terms of – one, we can hope to see HyperCloud revenue .. but .. can you also answer just (if there is) anything on the competitive front that’s really come close that’s slowed you down in terms of the qualification processes at any of the major OEMs ?
at the 30:45 minute mark
Chuck Hong:
Uh .. the competitive products .. uh .. you know have been out there and have been anticipated.
Uh .. it’s .. uh .. fairly independent of the progress that WE’VE made, and the process that we’ve undertaken to get the product tested and qualified at the OEMs.
It’s been independent of .. what the competitive .. uh .. products have done.
We have .. uh .. we have a .. much faster .. uh .. and a better performing product .. than the LRDIMM which is the .. uh .. a similar product that is .. that is going to be available for Romley .. uh .. that product is not available at Westmere .. um .. currently.
So .. we believe our product .. with higher performance will .. win out.
And that is .. the feedback that we have received – objective feedback – from the OEMs.
As they’ve gone through performance testing of our product vs. the LRDIMM in .. in their servers.
Orrin Hirshman – AIGH Investment Partners:
Ok, thank you.
at the 32:30 minute mark
Ian Mendoza – Prospect Capital
Uh .. hi guys.
Had a .. couple of questions. Some of them were just answered .. uh .. but could you maybe talk a little bit about .. uh .. what you’re seeing on the competitive front with .. uh .. with NetVault .. any .. new entrants there.
And maybe as part of the answer you can remind me if you are sole sourced at DELL, or if they use someone else .. as well ?
at the 33:00 minute mark
Chuck Hong:
Uh .. on the the NetVault .. product.
Um .. you know there are .. uh .. smaller .. uh .. competitors.
We .. are .. we believe we are the only .. uh .. independent manufacturer of .. what we call the “cache to flash” product category .. uh .. that is shipping this product in high volume into major OEMs.
There is .. uh .. another major OEM that is building a similar product .. in house.
Um .. so this NetVault product .. uh .. currently .. uh .. at DDR2 is being shipped into .. server .. uh .. RAID backup .. applications.
Uh .. but we are starting to see much .. more .. opportunities in .. storage .. uh .. which is a much broader, many many more applications in the storage space.
Uh .. and that’s where we’ll probably go. We are entertaining those opportunities today with the current DDR2 NetVault as well as the DDR3 NetVault product which is soon to be introduced.
Uh .. so .. small customers .. uh .. small competitors for this .. um .. for the current NetVault product .. um .. at at the next-generation for the storage applications, you know, we’ll see who’s out there.
at the 34:50 minute mark
Ian Mendoza – Prospect Capital
Ok. What are the qualification cycle times like for the storage opportunities – are these 2011 opportunities or more .. 2012 ?
Chuck Hong:
There are a number of .. uh .. qualifications that are under way .. uh .. today with the existing DDR2 NetVault.
And we will .. uh .. you know we probably have .. uh .. a dozen opportunities with with DDR3 NetVault in the various storage and industrial type of applications.
And the qualification cycles are .. uh .. quite long .. um .. with with the NetVault product because it’s .. uh .. you know it’s mission-critical and it’s .. uh .. a lot of integration that needs to happen between .. uh .. our subsystem and and the storage systems.
at the 35:50 minute mark
Ian Mendoza – Prospect Capital
Ok. That’s helpful.
And I had one one question getting back to HyperCloud.
Uh .. in the press release about the (testing ?) – I think it made some reference to .. uh .. uh .. memory densities of 288Gbits (should be 288GB i.e. 288GBytes) and I thought that .. uh .. the spec was for 384 (i.e. 384GB) ?
Has the spec changed as you’ve gone through the process of refining the product or or is this .. so I guess that is the first question, and if it has will the spec be different for “HyperCloud 2” than for “HyperCloud 1” ?
at the 36:25 minute mark
Chuck Hong:
Well, I I think in the latest .. uh .. uh .. InterOp show .. uh .. was a few months ago in New York.
Uh .. we demonstrated 288 GIGABYTES (288GB) of .. memory .. in a server.
Uh .. which then on the screen showed that it was hosting 100 clients.
Uh .. with the benefit of 288GB running in that server.
Uh .. and that that’s been the maximum that we have demonstrated.
Initially the 384GB .. um .. memory capacity was .. uh .. advertised.
Our product is capable of .. uh .. running up to 384GB, however server systems are not there .. to do that.
They would need 4 DIMMs per channel .. uh .. in order to accommodate 384GB – there is currently no server out there.
The maximum DIMM sockets per channel is 3 today.
So if they get to 4 .. uh .. we would we would be able to get to those high kinds of densities.
(Explanation: 4 DIMM sockets per channel x 3 channels per processor x 2 processors in 2-socket server = 24 DIMM sockets total in that 2-socket server, and using 16GB HyperCloud would give a total of 24 x 16GB = 384GB total memory if you have 4 DIMM sockets per channel)
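(The arithmetic in the explanation above can be checked directly; with today's 3-DIMMs-per-channel boards, the same formula also reproduces the 288GB figure from the InterOp demo mentioned earlier.)

```python
# Capacity arithmetic from the explanation above: 16GB HyperCloud modules
# in a 2-socket server with 3 memory channels per processor.

gb_per_dimm = 16
channels = 3 * 2            # 3 channels per CPU, 2 CPUs = 6 channels

today  = 3 * channels * gb_per_dimm   # 3 DIMMs/channel -> 288 GB (the demo maximum)
future = 4 * channels * gb_per_dimm   # 4 DIMMs/channel -> 384 GB (the advertised spec)
print(today, future)  # 288 384
```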
at the 37:45 minute mark
Ian Mendoza – Prospect Capital
Ok.
Then were there any .. speed tests done as part of that testing or or that you were able to kind of do at at .. InterOp.
Chuck Hong:
Yes .. I believe it was running at .. uh .. 1333MHz (i.e. full speed).
at the 38:05 minute mark
Ian Mendoza – Prospect Capital
Ok. So that part (unintelligible). Very good. That’s helpful.
And I guess, kind of last question, you know with the qual cycles and the expense of targeting these major OEMs with with NetVault AND HyperCloud, how do you .. how do you prioritize or .. are you forced to prioritize your your resources on on kind of the opportunities that you think are best ?
Or are you able to kind of go out there and .. and kind of .. be in all the bakeoffs (i.e. contests).
at the 38:30 minute mark
Chuck Hong:
Well .. with the major OEMs and .. uh .. you know .. uh .. major server and storage OEMs, that prioritization is already .. you know .. de facto in place.
I mean that .. that took place, those decisions were made on .. uh .. where to qualify, who to engage, you know many quarters ago.
And .. it’s just taken a long time to work through that process and .. that’s that’s you know we feel like we’re getting down to the .. you know 2-yard line on all of those .. uh .. qualifications.
And so those .. those priorities have already been set.
And in terms of the new priorities, with NetVault .. uh .. you know you go through .. uh .. an ROI (return on investment) analysis of the .. uh .. the potential opportunities .. uh .. and .. uh .. you know we pick and choose the the most attractive ones as would ANY business.
Ian Mendoza – Prospect Capital
Very good. Alright. Thanks guys.
Gail Sasaki:
Thank you.
Chuck Hong:
Thank you.
Operator:
.. no further question .. like to turn the call over to management for closing remarks ..
at the 39:45 minute mark
Chuck Hong:
Thank you all again for .. uh .. being involved with NLST and we look forward to sharing further information on our progress in the upcoming quarters.
Thank you very much.
Thanks, netlist
A long one there. Going to have to set aside a few minutes to go through it all.
Sorry, I should have posted a link to the accompanying summarizing thread on the NLST yahoo board:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=30299&mid=-1&tof=1&rt=1&frt=2&off=1
NLST Q4 2010 earnings call transcript (not exact) 3-Mar-11 06:17 am
Thanks, netlist.
http://www.netlist.com/investors/investors.html
ROTH OC Growth Stock Conference
Wednesday, March 16, at 11:00 am Pacific Time
http://www.wsw.com/webcast/roth24/nlst/
23rd Annual ROTH OC Growth Stock Conference
March 13-16, 2011
Laguna Niguel, California
Roth Capital Partners
——————–
Moderator:
I’m very pleased here to welcome another .. of our local favorites – Netlist Inc.
And we are very fortunate to have .. uh .. Chuck Hong, President and CEO, to make the presentation.
Chuck Hong:
Well, thank you for joining us today and .. uh .. I’d like to take this opportunity .. to go through .. uh .. the memory challenges .. uh .. server memory challenges in cloud computing.
And how NLST solutions .. uh .. help address and resolve some of those .. uh .. issues.
(pause)
Brief overview of the company. We were founded 10 years ago here in Irvine (CA) .. uh .. two miles down the road.
We .. went public on NASDAQ at the end of 2006 .. uh .. and we have a factory.
Most of our design, sales and marketing .. uh .. work is done here in Irvine and the manufacturing of our end memory module products are done in .. Suzhou, China outside of Shanghai.
Over the years most of our business has been done with .. uh .. major OEMs .. uh .. IBM, HP, DELL, FFIV.
In the past 3 years since IPO, the company’s embarked on a couple of major breakthrough technologies .. uh .. one is HyperCloud, and the other is NetVault.
We’ve got an extensive patent portfolio built around those .. uh .. two technologies.
Our target market is .. uh .. is in the cloud, in the data centers .. uh .. for storage and servers.
And combined, it’s about a $3B addressable market .. uh .. this year. And growing.
at the 2:10 minute mark
Uh .. as many of you have heard about the .. the emergence of cloud computing .. um .. I think the thing to note is that cloud computing – most of the applications in the cloud are highly .. uh .. DRAM memory-intensive.
You have various different kinds of memory which we can go through here, but .. DRAM is the main .. uh .. memory in a server which .. uh .. interfaces with the CPU.
Uh .. so in .. in a lot of the social networking, video streaming, virtualization .. uh .. where you’re reducing the number of servers to get more efficiency out of each.
High performance computing (HPC) where you are doing simulations to .. uh .. and modelling .. um .. securities trading. All of these require .. uh .. quite a bit of .. uh .. they are all DRAM-intensive.
at the 3:10 minute mark
On the other hand, on the supply side .. um .. you see huge shifts in the DRAM landscape and .. (pause) .. for the first time this year, flash will exceed DRAMs in terms of worldwide shipments .. uh .. and DRAM investments will be decreasing. You’ll have less and less DRAM manufacturers putting in big dollars.
Uh .. they face financial as well as technological .. uh ..difficulties in progressing DRAM density .. uh .. over .. DRAMs have been around 30 years, but it’s kind of now hitting the ceiling in terms of .. uh .. the density progression.
And so DRAM technology is not keeping up, despite the increases in the demand for DRAMs.
at the 4:10 minute mark
So .. the pace of technology creates the need for faster and denser memory.
These are some of the variables:
– multicore processors that are built by INTC and AMD require more memory
– virtualization – fewer servers doing the work of many servers, require more high density memory (high density because number of DIMM sockets being limited)
– and cloud computing .. uh .. where you have server consolidation, requires more memory (each VM running in VMWare requiring 4GB or so per VM, for example, with each processor core running a few VMs per core)
That results in what we call a “server memory gap”, where as you can see here starting in the next couple of years, you will see a huge gap between what the ideal memory is .. uh .. needed in these servers, compared to what will be available from the industry .. uh .. without our solution.
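(Illustration: to make the virtualization driver of the "server memory gap" concrete. Only the ~4GB-per-VM figure comes from the talk; the core and VM-per-core counts below are hypothetical assumptions by the transcriber.)

```python
# Hypothetical memory sizing for a virtualized 2-socket server.
# gb_per_vm is the figure from the talk; the other counts are assumptions.

sockets = 2
cores_per_socket = 6     # e.g. a Westmere-EP class part (assumption)
vms_per_core = 3         # "a few VMs per core" (assumption)
gb_per_vm = 4            # ~4GB per VM, per the talk

needed_gb = sockets * cores_per_socket * vms_per_core * gb_per_vm
print(needed_gb)  # 144
```

Even with these modest assumptions, a consolidated server wants well over 100GB of DRAM, which is the gap high-density modules are meant to fill.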
at the 5:05 minute mark
The other way to frame this problem is I/O congestion (I/O = input/output).
And I know .. uh .. there has been a lot of talk about .. within the network .. uh .. I/O bottlenecks creating problems in cloud computing.
If you were to do more of .. uh .. you were to run your servers more efficiently and do more of the work within the server, between the CPU and memory .. uh .. there would be less of a need to go OUTSIDE of that server to fetch data.
So, because the servers of today are not being run efficiently, there is a lot of data having to go out to the solid-state drive (SSD) or to the hard drive. And that is creating .. I/O congestion.
And some of the factors impacting that are .. the write speed of the hard drive, the location of the storage devices relative to the server, and the utilization of the server.
at the 6:10 minute mark
So if you look at the various types of .. uh .. memory .. uh .. and this is .. you can look at this as a storage hierarchy of data .. uh .. within a server and then a network.
Starts with a CPU and there is SMALL amounts of cache memory in an INTC or an AMD CPU.
And then you have DRAM – that’s your main memory, that’s your volatile memory (volatile meaning it goes away if shut off power).
You have then PCI-SSD .. uh .. which is a solid state drive being run on a PCI (socket) .. uh .. and Fusion IO is an example of that solution, and then you have rotating media which is the hard drive (rotating disk platters).
at the 6:50 minute mark
And then you’ll see those numbers .. um .. DRAMs are run at nanoseconds – 10 nanoseconds.
SSDs are 10 microseconds (1 microsecond = 1000 nanoseconds – thus 10 microseconds = 10,000 nanoseconds).
And then you’ve got 100 microseconds for .. uh .. SATA SSD (100 microseconds = 100,000 nanoseconds).
And then you’ve got hard drive being run at .. milliseconds (1 millisecond = 1000 microseconds = 1,000,000 nanoseconds).
And each of those steps is on the order of a factor of 1000.
You see that DRAMs – nothing can get to the speeds of DRAMs.
And, this .. in the server .. and in the storage space.
So if you are running a .. if you are pulling up a youtube video.
If it is run off DRAMs, you are going to get a seamless .. uh .. you are going to get good quality.
If there is not enough DRAM .. not enough FAST DRAM, the CPU would have to go out to the SSD or the hard drive to fetch that video. And that’s where you are going to see a lot of the buffering.
Same thing in .. uh .. financial transactions.
In high speed trading, high frequency trading, you want to do that off a DRAM and not go out to the hard drive, or you will have lost that trade (because high-frequency trading depends on making a trade well before others in the market and they make money from the small time-difference advantage they have over other traders).
at the 7:50 minute mark
So here is a look at our product, and basically the .. the core of this product is the chipset which controls all of the DRAMs.
You have a register device, and an isolation device.
One performs what we call “rank multiplication”.
The other 9 devices perform “load reduction”.
“Rank multiplication” is simply taking 2 lower-density DRAMs and making it look like one to the CPU.
“Load reduction” means you are loading .. the the .. you are reducing the load on these chips so that the chips will run faster.
And those are the two .. uh .. IP – the fundamental IP that we have.
And our DIMMs, our memory modules reside next to the CPU in a server. And this is what it looks like.
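A toy sketch of the "rank multiplication" idea described above (the function name and bit layout here are illustrative assumptions, not Netlist's actual design): the register device intercepts the chip-select lines plus one extra address bit, so two physical ranks of lower-density DRAMs answer as one larger logical rank from the CPU's point of view.

```python
def rank_multiply_decode(logical_cs: int, high_addr_bit: int) -> int:
    """Map one logical chip-select plus one extra address bit onto
    physical chip-selects, so two low-density ranks appear to the
    CPU as a single rank of twice the density.
    (Illustrative only -- the real decode logic lives in the register device.)"""
    if logical_cs not in (0, 1):
        raise ValueError("toy model: two logical ranks only")
    return logical_cs * 2 + high_addr_bit  # physical ranks 0..3

# The CPU sees 2 ranks; the DIMM actually drives 4.
seen_by_dimm = {rank_multiply_decode(cs, b) for cs in (0, 1) for b in (0, 1)}
print(sorted(seen_by_dimm))
```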
at the 8:45 minute mark
So a diagram for what our product does in a server – you have the CPU .. uh .. on top.
What we are essentially doing is we are making the data transfer from the CPU to the DRAM – main memory – run MUCH faster and allowing the CPU to recognize all of the memory that resides there.
Without our chipset, without our technology .. uh .. the data would be transferred very SLOWLY and then they would have to go out to the disk drive to fetch the data, which FURTHER slows down the transactions.
So with our chipset we have a 44% increase in the bandwidth and a 100% more memory capacity that the CPU can recognize and .. and act on.
at the 9:40 minute mark
So these are some of the applications that would benefit greatly from .. uh .. the faster and bigger – faster data transfer and wider bandwidth between the CPU and memory.
– virtualization mem cache (memory cache ?)
– oil and gas
– EDA (electronic design automation ?)
– and search applications
at the 10:00 minute mark
So, HyperCloud minimizes I/O congestion.
Simply we are making that server run more efficiently, so it does not have to go OUTSIDE of the server, and tax the I/O to get to the data.
And some of the endorsements from the OEM side, validating the value of a greatly increased memory footprint in the systems of the OEMs.
On the .. on the end customer side .. uh .. companies like VMWare and MSC software .. uh .. see our product and our IP to be complementary.
They are trying to make their software run that server more efficiently.
And without the necessary complementary hardware .. uh .. that software will not be able to do its job.
So .. that’s .. these are some of the use case perspectives.
Um .. we are one of the two .. uh .. memory suppliers, or memory IP providers that are partnering with VMWare.
And .. uh .. it’s a highly .. complementary offering, as I said.
at the 11:25 minute mark
VMWare is trying to get one server to do the job of 10 servers.
But that 1 server then, through their software, through their virtualization.
But that one server then needs to have multi-core .. have the hardware .. uh .. that’s .. uh .. got the capabilities to .. run .. uh .. their software.
at the 11:55 minute mark
So .. talked a little bit about the rank multiplication and load reduction technologies .. uh .. these IP .. this IP came out of our initial work with AAPL, going back to 2004, where we created a chipset .. uh .. that is .. uh .. running the AAPL X Server (?).
And that was being run off an IBM Power CPU (PowerPC).
And that .. uh .. they were a sliver of a market back then, but that particular problem that we solved, working with AAPL .. uh .. has led to all of this IP creation.
And .. uh .. that problem of the I/O bottleneck between the CPU and the DRAM today is an industry-wide .. uh .. problem that exists in .. all servers.
at the 12:45 minute mark
Um .. so we we believe that this IP allows to be .. allows us .. positions well .. uh .. for the future where where the industry’s going, because they are going to REQUIRE these rank multiplication and load reduction technologies.
at the 13:00 mark
So where is .. uh .. the product today .. uh .. in terms of adoption .. um .. it’s been about a year’s worth of work that we’ve undergone with the major server and storage OEMs around the world.
And it is .. uh .. currently, we believe that the market is .. blanketed .. uh .. with our products.
It’s at all of the major .. uh .. major OEMs .. uh .. we are also having it currently tested by one of the major CPU vendors.
Uh .. we’ve got motherboard vendor .. uh .. qualifications and CMTL, which is the INTC-compatible memory qualification lab.
Those qualifications have been achieved, so we believe that we are .. uh .. making good headway and .. uh .. towards .. uh .. achieving broad market adoption this year.
at the 13:55 minute mark
And the market opportunity for this .. uh .. HyperCloud product is .. significant .. uh .. looking out .. (pause) .. today we’re estimating this to be a couple of hundred million dollar market opportunity for us, growing to a billion dollars in the next 3 years.
at the 14:20 minute mark
So that was HyperCloud – I want to go through a complementary product .. uh .. which we announced yesterday .. uh .. called the EXPRESSvault.
And this is a product you have a lot of .. HyperCloud .. if you look at HyperCloud, that .. uh .. that makes the server run efficient by getting the CPU and main memory to talk to each other much more quickly, efficiently.
What THIS product does is .. uh .. backs up that data.
If there is a power outage .. some sort of interruption .. uh .. you are going to have that data, which is volatile and live – an ATM transaction that is ongoing – if the power goes out, that data needs to .. survive.
And that’s what we do with EXPRESSvault. We are backing up that volatile data.
And this is some .. this shows how important .. uh .. data protection is .. is that .. uh .. many companies actually experience .. uh .. data loss and .. uh .. when they do, one out of 3 goes out of business within 2 years.
So protection of volatile data .. uh .. transaction data, esp. mission-critical data, within a corporation .. uh .. is very important.
And the target markets for this product is similar .. um .. it goes into a server as well as storage applications, into grid-computing and a lot of electronic financial trading applications.
at the 15:55 minute mark
And here is .. this may look complex, but it is not. This is actually how that data gets backed up.
You have the data blocks entering on the left into the CPU – that data then goes to the DRAMs and goes back and forth with the CPU – CPU does the data compute.
If the power goes out in the middle of that data compute, that data is stored in this product – the EXPRESSvault.
And from that storage it can store it into the hard drive and pull it back out and then .. and then give it back to the DRAM and the CPU when the power is restored.
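The backup/restore sequence just described can be modeled as a toy state machine (all names and the dict-as-memory model here are illustrative assumptions, not the actual EXPRESSvault firmware): on power failure the ultracapacitor holds the module up long enough to flush volatile DRAM contents into flash, and on power-up the saved image is handed back.

```python
# Toy model of the backup-on-power-loss flow described above.
class VaultToy:
    def __init__(self):
        self.dram = {}    # volatile working memory
        self.flash = {}   # non-volatile backup store

    def power_fail(self):
        # Ultracapacitor keeps the module alive long enough
        # to copy DRAM contents into flash before they vanish.
        self.flash = dict(self.dram)
        self.dram.clear()          # volatile contents are lost

    def power_restore(self):
        # On power-up, the saved image is restored to DRAM for the CPU.
        self.dram = dict(self.flash)

v = VaultToy()
v.dram["txn_42"] = "ATM transaction in flight"
v.power_fail()
v.power_restore()
print(v.dram["txn_42"])
```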
at the 16:35 minute mark
So here is what this product looks like, and some of the IP that resides in this.
It is a PCIExpress interface (PCI = socket on motherboard).
And it is a .. uh .. the bridge controller device which converts PCI to DDR2 (DDR2 DRAM memory) is our IP.
And we create that engine and .. uh .. also importantly is a NetVault module which we’ve .. uh .. already have and are shipping in high volume into the DELL PowerEdge product.
That is .. that is the IP that takes data from DRAM and stores it into flash at the point of power interruption.
And you have .. uh .. this ultracapacitor technology which replaces battery.
And so ecologically you get rid of the battery .. the battery problem .. uh .. and the need for the technician to go out to the OEMs to check the battery every .. every year, pretty much.
So in comparison to other potential solutions that are out there, we believe that we have .. tremendous value in terms of cost/performance.
As you can see our solution .. EXPRESSvault .. uh .. has much higher throughput, is much faster and .. uh .. it’s a relatively inexpensive solution compared to an SSD.
So that has been introduced into the market in the last few weeks and is being designed in at some of the OEMs.
So to summarize these two major technologies – they fit very well into the financial markets.
And we work very closely with .. uh .. big .. big banks and institutions on Wall Street to .. have them recognize directly the value of our IP .. which then they call out to the hardware manufacturers .. uh .. in their .. huh .. technical requirements.
at the 18:50 minute mark
As a business, in the last 10 years, we started out building a lot of .. uh .. memory module products that are based on thermo-mechanical innovations – stacking more memory in a given space. And we got very efficient at doing that.
At a certain point it got to a point where you can pack in all this memory, but .. physically .. but there was a server .. uh .. bottleneck .. between the CPU and the memory.
And so we got into the .. electrical side of this .. and created chipsets .. uh .. the logic in particular, that facilitated the data transfer and .. uh .. addressed that bottleneck issue.
So, moving forward, we are evolving. We are still a memory module company, but we are evolving to .. uh .. also to .. uh .. to become a designer of custom logic, and over time an established semiconductor company.
at the 19:50 minute mark
And we believe that today it is a niche market. At the very high end .. addressing the financial markets and also virtualization, data centers, but that will evolve into industry-wide adoption of this technology as the servers become faster. And the DRAM manufacturers continue to have issues trying to progress the density of their DRAMs.
at the 20:10 minute mark
So very quickly on some of the financial highlights.
As you see that .. uh .. our gross margins .. uh .. the product gross margins are steadily increasing.
R&D (research and development) .. uh .. we have expended quite a bit of R&D to create the next generation HyperCloud last year.
Uh .. we believe that will .. uh .. flatten out this year, because the bulk of the spending was done last year.
at the 20:42 minute mark
Uh .. the balance sheet as of Jan 1, 2011 is .. uh .. shows how we’ve got sufficient cash .. uh .. and resources on hand to continue to .. uh .. roll out our product into the market this year, and continue the R&D efforts as well.
at the 21:00 minute mark
So in summary, we are a company .. uh .. that has a long track record with the major OEMs – the guys that we are targeting today to adopt – and that are testing currently our new products – the HyperCloud, the NetVault.
And then in the end, they also will become the adopter of our IP. So it is important that we’ve got a long-running relationship with these customers.
We are addressing a VERY large market .. um .. and we’ve got a strong IP position in these two seminal .. uh .. technologies.
at the 21:45 minute mark
And as you .. as I’ve just explained that we’ve got flexibility in this business model.
As we are today .. uh .. a memory module provider, manufacturer, designer, but we are also a designer of custom ASIC (application-specific integrated circuit) logic chip .. uh .. so that can evolve into a fabless IC and an IP licensing model as well, as the market gets .. more mainstream.
at the 22:15 minute mark
And .. uh .. so therefore we think there will be long-term ROI (return on investment) on the investment that has been made in the last couple of years and that we continue to make. ROI that’s going out through the rest of this decade.
And we’ve got a strong management team which .. still holds a significant stake in the company .. uh .. and we’re in this for the long term.
So with that, I’ll .. uh .. open it up for some questions.
Question and Answer:
at the 22:50 minute mark
Moderator:
Yeah Dave.
Analyst:
What’s going to make your TAM (Total Addressable Market) or your SAM (Served Addressable Market – the portion one is able to serve) stand out .. kind of .. trading .. supporting (?) the Jeffries numbers ?
Why why does that really start to mushroom out. What’s the real change that is forcing that ?
(Explanation of TAM/SAM/SOM: http://answers.yahoo.com/question/index?qid=20060930204510AA4SAvf )
at the 23:00 minute mark
Chuck Hong:
I think the movement .. uh .. of servers to higher speed is critical.
There are two things – so on .. on the demand side .. um .. you’ve got servers .. you’ve got cloud computing which means more servers.
But servers also running much faster.
Today it’s running at about 1GHz (probably referring to the memory bandwidth i.e. 1333MHz, 1066MHz and 800MHz – as you increase memory loading on a server’s memory channel).
In a couple of years – in a few years it’ll move to 2GHz. That’s a huge jump.
Without this technology, the CPU will run that fast but .. memory will not be able to .. to run as fast.
So that’s one .. the other thing .. so at DDR4, our technology .. is looking to be adopted by the industry as the defacto mainstream.
Today it is a high-end market segment.
at the 23:55 minute mark
And then .. so the DRAM manufacturers will continue to have issues progressing .. uh .. their DRAM densities such that .. uh .. we’ll have to use more .. DRAMs to achieve the densities – more more DRAMs.
So when you use 72 DRAM chips vs. 36 (note this is not 32 and 64, because server memory modules include extra chips for error correction, which is why 36 and 72 are the standard numbers they talk about), you are going to need to do the rank multiplication – use that rank multiplication technology that we have.
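The 36/72 chip counts fall out of simple arithmetic, assuming x4-wide DRAM devices (a common server configuration): a 64-bit data path plus 8 ECC bits gives a 72-bit bus, which needs 18 chips per rank.

```python
# Why server DIMM chip counts come in multiples of 18
# (illustrative arithmetic, assuming x4-wide DRAM devices).
DATA_BITS = 64
ECC_BITS = 8          # error correction widens the bus to 72 bits
CHIP_WIDTH = 4        # each "x4" DRAM device contributes 4 bits

chips_per_rank = (DATA_BITS + ECC_BITS) // CHIP_WIDTH
print(chips_per_rank)         # chips in one rank
print(chips_per_rank * 2)     # dual-rank module -> the "36" in the talk
print(chips_per_rank * 4)     # quad-rank module -> the "72" in the talk
```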
at the 24:20 minute mark
Analyst:
One one other question. You know we have OCZ (who was here) the other day, which makes solid-state drives (SSD) .. right .. and then you mentioned Fusion IO which is essentially “flash on the board” ..
Chuck Hong:
Right.
Analyst:
.. that interface (?) .. and then you guys have a different way of of accelerating ..
Chuck Hong:
That’s right, that’s right.
Analyst:
There are all these different things that are going on to make .. just to make a little bit faster in and out, so .. how do you .. what’s your synergy with with those guys you’re doing (?) and how do you care to play with each other – are they all necessary ?
at the 24:50 minute mark
Chuck Hong:
Well (in) some ways they overlap.
If CPU does a better job of .. transacting data with the main memory, you will have a less need in that server to go out to the SSD. Right ?
So .. going out to SSD is not .. uh .. the most efficient way to transact data when you are trying to .. when you are doing high frequency trading.
Or virtualization or high performance computing (HPC), so .. you know I don’t think we are DIRECT competition, but .. if one of those solutions does .. performs better at lower cost, then .. you know the solution moves there.
Analyst:
What’s the interplay like between your product and INTC and AMD, in terms of obviously INTC and AMD realizing that these kinds of bottlenecks are potentially limitations to (unintelligible) of their own product.
Chuck Hong:
That’s right.
Analyst:
They have a history of .. of incorporating .. uh .. and improving their own I/O and changing their own ..
Chuck Hong:
Right.
Analyst:
.. designs to incorporate some of these features, so how do you protect yourself from them essentially .. uh .. moving into this space or making changes in their processor design or board design (motherboard) that obviously (?) ..
at the 26:15 minute mark
Chuck Hong:
Right right, so INTC and AMD, as they startup on these server CPU designs – they start 5-6 years ahead of time.
So I don’t think they .. HERE they did not foresee the .. the onslaught of .. cloud computing and virtualization on the demand side, and on the supply side they probably did not see .. that the DRAMs would not get there.
Now for the next generation DDR4, they .. I don’t believe .. it doesn’t look like they’re going to make any more changes.
Their solutions – they’re going to have to .. in order to obviate this kind of a solution (i.e. neutralize NLST HyperCloud), they would have to come up with a .. bigger chip, more pin counts .. uh .. more power consumption.
That’s a multi-billion dollar plus solution.
We have .. off of THEIR chipset .. they also see this as more of a “memory industry” problem, not their own, although they are impacted by it.
So, it’s really the efficiency of the solution.
Ok, we believe we’ve got a much more efficient solution .. that .. uh .. is not a multi-billion dollar solution. Right ?
Moderator:
Ok, thanks .. thank you very much for the presentation.
Chuck Hong:
Thank you.
——————–
A link to the accompanying summarizing thread on the NLST yahoo board:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=30820&mid=30820&tof=1&frt=2#30820
NLST Roth Mar 17 – transcript (not exact) 17-Mar-11 08:03 am
Netlist,
Thank you for your efforts….Obviously you believe this company is going to make an impact or you wouldn’t go through all this work. Do you have any projections of your own as to when and where this can go? I had some of my own and I invested for the long term, but given Allen & Caron’s failed projections and the timelines laid out by Hong that have come and gone time and time again, I have my doubts…
Anyone else have any projections/opinions – I am all ears.
Thanks again!
What is the status of this lawsuit?
memoryGeek
To the best of my knowledge the lawsuits are on hold pending USPTO patent review in response to request of INPHI.
comment on this update much appreciated..tx
http://intellectualvalue.org/?p=67
Hi Wedgecake,
My initial reason for writing this post was the mystery surrounding why Google acquired metaram’s patent portfolio. That interest has evolved into cheering a little for the underdog, in this case Netlist, as they battle to provide memory modules that can transform ordinary computers into something much more powerful.
Hi Bill,
It’s a fascinating case and a Google “dilutory” strategy while they “use” Netlist IP. Looks like USPTO has consolidated all suits into one re-exam. I wonder how many years before we can get their initial review results?
To anyone with USPTO experience:
Could the consolidation of all of the challenges be a sign that USPTO would like to expedite a decision in order to clear repetitive cases on the docket? Hence a more rapid decision.
Hi Wedgecake,
It is fascinating, and I hope that someone on any side of the litigation is taking notes, with the eventual aim of writing a book about it. I suspect the human side of the story is even more interesting.
Hi Fallguy,
No specific USTPO litigation experience here, but courts often do consolidate cases in the name of “judicial economy,” so that’s probably something that was carefully considered when the cases were joined together.
Thanks Bill,
I should have been more clear in my question. I am really wondering if it could be an indication of fast-tracking. I don’t know if they even do that kind of thing, or if it is first come, first served only. To me (I know that means little to the rest of the world) it seems that this patent is drawing a lot of attention, and these issues could very well be holding up implementation of the technology. Do they even care? Probably not.
Hi Fallguy,
I worked at the highest level criminal and civil lawcourt in Delaware for more than a decade, so I have seen this kind of consolidation and “fast tracking” take place many times. Judicial economy includes the concept of fast tracking when it makes sense to do, such as consolidating new cases with older ones when many of the issues that might be involved in one case will impact the outcomes of the other cases. I’ve seen many cases put on hold as well, for a decision to be made in one case that can have repercussions for the others. We’re talking about criminal cases that involved death penalties, and civil cases involving some of the largest corporations in the country.
Hi Netlist
I heard recently that next generation DDR4 is based on Netlist and Intel is talking to CEO Hong about industry wide license like Rambus. Do you think this would be a good deal for NLST to go the licensing route – since it appears that their internal development capabilities are suspect ? (Hong’s 2 yd line is more like 200 yds ?).
quote:
What is the status of this lawsuit?
quote:
Could the consolidation of all of the challenges be a sign that USPTO would like to expedite a decision in order to clear repetitive cases on the docket? Hence a more rapid decision.
GOOG/SMOD have reexams against NLST patents.
IPHI has reexams against NLST.
The USPTO has consolidated a total of 5 reexams into two reexams – one for ‘386 patent, and one for ‘912 patent.
This is probably simpler for NLST – as they can make consolidated responses as well. In fact given the way reexams are conducted, there may be no better way than to consolidate (esp. when reexams were all similar and happening at same time).
In these, the USPTO has completed the first office action – which frames the problem in hand – i.e. the “rejection” of the claims of the patent. As the reexam process proceeds the patent is built up from scratch – which is why the reexam process is nearly as long as a patent granting process.
Court cases were:
GOOG vs. NLST
NLST vs. GOOG
and
NLST vs. IPHI
IPHI vs. NLST
In detail:
GOOG vs. NLST was the earliest case (thus the most mature case and near jury trial). GOOG initiated this after NLST warned GOOG they were infringing. It was GOOG’s way to prevent an injunction against its servers.
NLST vs. GOOG is at an early stage.
NLST vs. IPHI – where NLST claims IPHI’s “iMB” infringes.
IPHI vs. NLST was retaliatory lawsuit in which IPHI claimed that two of IPHI’s patents were being infringed by NLST. Patents related to buffers in general and were not specific to NLST IP (or even close to the IP that MetaRAM held – which conceded to NLST).
All these court cases are stayed (at the JOINT request of NLST/GOOG and NLST/IPHI) pending the reexams. The “stay” means the cases are frozen, but have to be updated with news from the reexams. The cases otherwise remain frozen (apart from various bureaucratic activity) pending clarity from the reexam process.
Recently the case IPHI vs. NLST was retracted by IPHI – asking court for dismissal.
IPHI backing out of IPHI vs. NLST is very interesting. In response to the NLST PR about the dropping of the suit, IPHI issued a PR stating they were doing it because the reexams were going so well, IPHI decided to cut legal costs (?). Technically this may be valid – i.e. IPHI can restart the case. However, it is a very bad signal from IPHI. That is, a company like IPHI may not do this in such a high profile issue (since IPHI is touting itself as a next-gen LRDIMM supplier by providing the “iMB” buffer chipset for use by LRDIMM memory module makers).
This thread examines the possibility that the real reason for IPHI backdown may have been threat to IPHI patents (2 patents being used in IPHI vs. NLST) being invalidated if IPHI vs. NLST was pursued:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=31973&mid=32090&tof=4&rt=1&frt=2&off=1
Re: Bigger news .. INPHI capitulation 22-Apr-11 11:05 am
This is because in IPHI vs. NLST, NLST had counterargued that the IPHI patents had flaws of “double-patenting” (read the thread for more details) which may invalidate the second patent, and seriously damage the first patent (since newer patents failing to mention earlier patent can wind up restricting the earlier patent).
Add to that the combination of such actions and IPHI’s hurried IPO (many of the same analysts who cover NLST and IDTI – another buffer chipset manufacturer – FAIL to ask IPHI about their standing in the upcoming LRDIMM market, YET are able to ask NLST about it quite freely).
Not to be ignored is the participation of a horde of institutions in IPHI’s IPO (and now secondary offering – 3.8M out of 3.9M shares belonging to insiders/management).
The horde of institutions: Jeffries, Morgan Stanley, Needham, Stifel Nicolaus – all bullish on IPHI and all participated in the IPO.
Add to that exit of IPHI’s CTO after 10 years.
Add to that the ABSENCE of a yahoo board for IPHI or at any other place. Can IPHI PREVENT the creation of message boards – and has it done that wilfully to aid hiding of info until “after IPO” or “after we sell”.
It is surprising these analysts do not ask:
– why TXN is exiting the buffer chipset for LRDIMM segment (possibly linked to TXN settlement with NLST – which was reportedly favorable for NLST).
– why they refuse to ask about the legal issues (which are now in IPHI’s “Risk Factors” section in SEC filings for the share offering)
More in this post (from same thread as above):
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=31973&mid=32125&tof=3&rt=2&frt=2&off=1
Re: Bigger news .. INPHI capitulation 4 26-Apr-11 01:32 am
http://www.netlist.com/investors/investors.html
http://viavid.net/dce.aspx?sid=0000853C
Netlist First Quarter Results Conference Call
Wednesday, May 11th at 5:00 pm ET
Chuck Hong – NLST CEO
Gail Sasaki – NLST CFO
Jill Bertotti – Allen & Caron (Investor Relations firm for NLST)
Jill Bertotti:
(introductory remarks)
at the 1:50 minute mark:
Chuck Hong:
Good afternoon, Jill.
Thank you all for joining us to discuss the 2011 first quarter results (Q1 2011).
As you saw in the release, we got off to a solid start for the year, with 52% revenue growth over last year’s first quarter, as well as continued strong sequential growth of 19% over the 2010 fourth quarter (Q4 2010).
Margins remained about 30% during the period. And well above the levels of the prior year period.
The strong sequential growth in the period was due to an acceleration in demand for our Vault (NetVault etc.) family of products, as well as increase in the demand for our speciality memory modules which make up our base business.
We expect an overall revenue growth trajectory of about 20% per quarter to continue (20% sequential growth compounded over four quarters roughly doubles revenue) throughout 2011.
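For reference, 20% sequential (quarter-over-quarter) growth compounds like this – four quarters multiply revenue by about 2.07x, i.e. roughly a doubling over a year:

```python
# 20% quarter-over-quarter growth compounded over four quarters.
growth = 1.20
annual_multiple = growth ** 4
print(round(annual_multiple, 2))   # ~2.07x, i.e. about +107% over the year
```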
This anticipated revenue and gross profit growth should reduce our losses significantly as we target financial breakeven during the second half of the year.
at the 2:55 minute mark:
First, I wanted to breakdown our recent commercial successes with the Vault family, previously referred to as NetVault, which is targeted primarily at storage data protection.
Demand for the battery-free NetVault – NVvault product has been strong and growing over the past few periods. But it really ramped in the first quarter, as demand more than doubled for the product.
End-users are very satisfied with the increased performance, and are drawn to the cost and environmental benefits that the device helps to achieve.
at the 3:40 minute mark:
That demand and our order flow are expected to grow as a result of a new DELL promotional push for their “CacheCade” technology which uses OUR device.
DELL and LSI have architected a way to increase the performance of an SSD configuration by 76% with an NVvault being an important part of that configuration.
The end-user response is expected to be very positive in the coming months.
at the 4:05 minute mark:
During the quarter we expanded our Vault family of data-protection products with the introduction of EXPRESSvault.
EXPRESSvault is a PCIExpress backup and recovery solution for cache data protection.
This product, like NVvault, is battery-free, and combines DRAM, to deliver the high data throughput required by cache backup applications, with non-volatile flash.
Early response to the product has been very promising. In part due to the proven track record of NVvault.
We anticipate that order flow for EXPRESSvault will gain traction steadily in the coming quarters, accelerating in the back end of the year and early in 2012.
at the 4:50 minute mark:
To summarize business for our Vault product franchise is strong and growing.
We saw demand for NVvault battery-free products doubling in the first quarter, and expect it could double again in Q2.
at the 5:10 minute mark:
Our flash and SSD business will steadily become a more meaningful part of our revenue mix over the next several quarters as we launch new SSD products and increase our number of wins with data center equipment and embedded system OEMs.
To that end, we expanded our flash product portfolio with two new SSD additions. The mSATA mini SATA SSD module offers storage capacity of up to 32GB with onboard 64MB DRAM cache. And the mSATA Slim SSD module offers storage capacities of up to 128GB with onboard 64MB DRAM cache.
at the 5:54 minute mark:
Both products’ smaller form factor and support for ultra-dense applications make them ideal for data center equipment where compute-density is critical.
As data center equipment becomes increasingly compact, NLST is committed to offering new solutions that address those space limitations.
at the 6:15 minute mark:
Another benefit of our growing success in flash is that it creates another commercial bridge to the storage market.
Due to the pace of change in growing demand, storage is among the most attractive technology markets today.
And our participation in that market with our flash and SSDs, Vault data backup and recovery and speciality memory modules, further diversifies our efforts beyond the cloud computing space.
at the 6:45 minute mark:
During the quarter, we continued to work closely with the major server OEMs, major storage OEMs, end-customers, DRAM and CPU suppliers, and motherboard manufacturers in ongoing qualification work for HyperCloud.
One recent result of our efforts is qualification of HyperCloud by CirraScale for its VB1325 Blade Servers.
CirraScale develops built-to-order blade-based computing and storage data center infrastructures.
The company selected HyperCloud to support its Blade Server because of the enhanced performance we bring for memory-intensive applications, such as electronic design automation (EDA) and high performance computing (HPC) simulations.
Between now and the end of the year, NLST will be engaged in parallel efforts to win qualifications at the major server OEMs targeting specific market opportunities with Westmere platforms and at the same time we will continue to work towards broader OEM qualifications for Romley platforms.
at the 7:50 minute mark:
Potential OEM partners remain enthusiastic and supportive of HyperCloud and the benefits that they derive.
We are very encouraged with these working relationships as qualification efforts move ahead.
at the 8:05 minute mark:
As we move beyond DDR3 and into DDR4 technologies, the market for HyperCloud “rank multiplication” and “load reduction” capabilities will become mainstream for servers, in contrast to the niche high-end markets open to us today at DDR3.
at the 8:20 minute mark:
At that point, we believe that TAM (Total Addressable Market) will grow .. a total available market (TAM) .. will grow significantly from the $300M to $500M today.
We are EARLY to this space and ahead of the industry in our design and intellectual property (IP).
Our goal is to remain in a leadership position as this opportunity escalates.
at the 8:45 minute mark:
Due to that market potential, some companies in the memory space have challenged our patent position related to HyperCloud.
While these processes will need to run their course, we are comfortable in our position and confident in the validity and enforceability of our patents.
In fact, at the end of the quarter, the USPTO issued 3 new patents that add to our growing intellectual property (IP) portfolio, protecting HyperCloud innovations that utilize “rank multiplication” and “load reduction” technologies.
at the 9:20 minute mark:
It is also important to note that none of our current or pipeline technologies rely on any single piece of intellectual property for protection and commercialization.
In summary, we are executing in all of our core business categories, as evidenced by 8 consecutive quarters of increasing gross profit performance.
We continue to position HyperCloud and our Vault family as technology standards for the industry.
We will also continue our investment in the next generations of both product platforms and take advantage of new opportunities in flash and SSD to provide performance benefits and higher density to our customer base in a dynamic storage and cloud computing market.
at the 10:05 minute mark:
Gail will now provide you a more detailed financial update and first quarter results (Q1 2011).
at the 10:10 minute mark:
Gail Sasaki:
Thanks Chuck and good afternoon everyone.
As you saw on our release this afternoon, revenues for the first quarter ended April 2, 2011 (Q1 2011) were $12M, up 52% when compared to $7.9M for the first quarter ended April 3, 2010 (Q1 2010).
Revenue for our Vault family of products – NVvault battery-free and battery-backed – increased from the previous quarter by 12%.
The NVvault mix during the first quarter was, as we expected during our last call, weighted towards the higher ASP (Average Selling Price), more robust feature-set and battery-free version of NVvault.
Our OEM partners saw improved traction from their customers speaking to (?) operating, ecological and economic advantages of that product.
at the 11:05 minute mark:
Gross profit for the first quarter ended April 2, 2011 (Q1 2011) was $3.8M or 32% of revenues, compared to a gross profit of $1.8M or 23% of revenues for the first quarter ended April 3, 2010 (Q1 2010), an increase in gross profit dollars of 109%.
This improvement was due to the 52% increase in revenue, a favorable DRAM cost environment, as well as increased absorption of manufacturing cost, as we produced 64% more units than the year earlier quarter with only a slight 4% increase in the cost of factory labor and overhead.
We continue to plan on a range of between 25% and 30% for our gross profit percentage for the remaining quarters of 2011, which will be dependent on the quarter’s product mix, DRAM cost and continued growth in unit production in each quarter.
at the 12:00 minute mark:
Net loss in the first quarter ended April 2, 2011 (Q1 2011) was $2.8M, or an $0.11 loss per share, compared to a net loss in the prior period of $3.0M, or a $0.14 loss per share.
These results include stock-based compensation in the first quarter of $353,000 compared with $382,000 in the prior year period.
And depreciation and amortization expenses of $581,000 in the most recent quarter compared with $578,000 in the year earlier period.
at the 12:35 minute mark:
Total operating expense was flattish at $6.6M from $6.5M in the previous consecutive quarter (Q4 2010), as we had estimated during the last quarter’s call.
The increase from $5.6M in the year earlier quarter (Q1 2010) was primarily due to higher non-recurring engineering charges, headcount and material expenses related to product sales, primarily for HyperCloud and NVvault development.
Sales and marketing expense was also flat between consecutive quarters but did increase by 18% from the year earlier quarter (Q1 2010) as we have expanded sampling and qualifications activities by a large percentage and invested in new headcount necessary to execute our vertical marketing strategy of engaging with end-user customers.
at the 13:20 minute mark:
We expect that operating expenses may increase by 10-15% during the second quarter (Q2 2011) and stay flattish throughout the remainder of the year.
We did not record a benefit for income taxes for the first quarter ended April 2, 2011 (Q1 2011) as the operating loss carryforwards generated were fully reserved.
On a go-forward basis we anticipate a rate near zero percent until we begin to utilize our fully reserved net deferred tax asset.
at the 13:50 minute mark:
We ended the first quarter with cash, cash equivalents and investments and marketable securities totalling $12M compared to $16M as of Jan 1, 2011.
During Q1, we took delivery of $3M of critical long lead time components to support fulfilment of flash, NVvault and HyperCloud product lines.
We do not expect this level of buy ahead to continue as we now have more visibility into our supply chain after the Japan earthquake.
at the 14:20 minute mark:
At the end of the quarter we had unutilized availability of $2.9M on our credit line.
During the first quarter capital expenditures totalled $110,000 compared to $208,000 in the previous year’s quarter (Q1 2010).
We anticipate investment in equipment to support increased capacity in our new products over the next several months of approximately $500,000.
As mentioned in our previous call, we continue to target financial breakeven later this year.
However we will still be a net user of cash during the year as our accounts receivable and inventory continue to expand to support the increased revenue.
at the 15:00 minute mark:
As you know from past calls, we have got sufficient capacity on our current $15M available line of credit for working capital needs.
In addition, after the quarter end, we signed a term loan agreement for an infusion of $3M from our bank partner to support general growth needs.
This gives us an additional buffer as we progress towards cash positive.
Thank you for listening in today.
Operator, we are now ready for questions.
Operator:
We will now begin the question and answer session ..
at the 15:45 minute mark:
Rich Kugele of Needham:
Thank you. Good afternoon.
Um .. just a few questions. I guess first you were just talking about the inventory, Gail.
Um .. did you see any supply disruptions ? Was that a .. just a precaution taking on that inventory.
Gail Sasaki:
Actually not seeing any disruptions yet. It was merely precautionary.
Rich Kugele of Needham:
Um .. and then just to get into some specifics in terms of the model.
Can you break down the revenue between the various categories, between the you know traditional business and NVvault, etc.
Gail Sasaki:
Sure. Um .. so ok, NVvault battery-free was 31% of our revenue this quarter.
And the battery-backed version was 30%.
So total of 61% for the Vault family.
at the 16:50 minute mark:
Flash .. um .. and other specialty memory was 38% of the remaining 39%.
And HyperCloud was minimal.
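(Transcriber’s note: applying that mix to the $12M quarter gives the approximate dollar split below. This is my own arithmetic from the percentages above; the ~1% HyperCloud share is inferred from the “minimal” remark, and dollars were not broken out on the call.)

```python
# Approximate Q1 2011 revenue split implied by the stated mix.
revenue = 12_000_000   # Q1 2011 revenue, rounded as on the call
mix = {
    "NVvault battery-free":   0.31,
    "NVvault battery-backed": 0.30,
    "Flash and specialty":    0.38,
    "HyperCloud (minimal)":   0.01,  # inferred, not stated
}
for product, share in mix.items():
    print(f"{product}: ~${revenue * share / 1e6:.1f}M")
```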
Rich Kugele of Needham:
Ok, and then on previous calls you’ve .. Chuck you’ve talked about there could be some HyperCloud deployments outside of a qual if they were pulled from the customer end.
Have you had any traction on that front and .. um .. would you expect that to actually happen or would you expect a qual to happen first.
at the 17:25 minute mark:
Chuck Hong:
Hey Rich. We expect to see HyperCloud revenues .. uh .. starting here in the next couple of months.
We have orders and .. so we’ll see some shipments start.
We’re still working .. actively to achieve broad qualifications across many different customers.
at the 18:00 minute mark:
Rich Kugele of Needham:
Ok, and then what is the breakeven, if you’re talking about being at least, I guess, EPS positive later in the year. What would the breakeven be .. um .. and the anticipated OpEx (operating expenditure) I guess at that level.
Gail Sasaki:
Rich, I think we’ve mentioned in previous calls that it should be about $20M in revenue.
With .. with an OpEx of around $7M.
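(Transcriber’s note: a quick check of that math, assuming “breakeven” here simply means gross profit covering operating expense, before interest, taxes and other below-the-line items.)

```python
# Implied gross margin at the stated breakeven point (rough check only;
# ignores interest, taxes and other non-operating items).
breakeven_revenue = 20_000_000   # ~$20M quarterly revenue, per the call
opex              =  7_000_000   # ~$7M quarterly OpEx, per the call

required_margin = opex / breakeven_revenue
print(f"{required_margin:.0%}")  # gross margin needed to break even
```

Note this implied margin sits at the high end of management’s own 25-30% guidance range.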
Rich Kugele of Needham:
Ok, great. I’ll get back in the queue. Thank you.
Gail Sasaki:
Thank you.
at the 18:50 minute mark:
Rich Kugele of Needham:
Ok, that was quick. Uh .. just wanted to get into the SSD side a little bit better.
Chuck can you clarify a little bit of the comments you are making about how NVvault is being used in .. an SSD system, and whether or not you are also referring to your SSD modules also being included in the same system. Or is that a third element.
Chuck Hong:
No, that is a different element. You have in the DELL PowerEdge Server .. uh .. we’ve been supplying the battery product as well as the battery-less custom module for many many years.
So in the last .. uh .. 6 months we started to ship the NVvault and that gets integrated into an SSD configuration that .. uh .. that is designed by DELL and LSI Logic.
And our NVvault product gets integrated into that SSD product.
Uh .. which then improves the performance of that product.
So we believe that is gonna be a catalyst for continued ramp of the NVvault product.
On the flash product offering that we are starting to build out .. uh .. that is our own product.
That is targeted more towards industrial and embedded applications, small form factor applications.
Where the product is not taking up a standard hard disk (HDD) drive bay, the way an SSD that is an HDD replacement would.
This product is much smaller. It is a SATA/miniSATA interface and it is going into various different military, industrial and .. uh .. some amount of data center applications where there are space constraints.
Rich Kugele of Needham:
Ok. That is helpful.
Um .. and then just lastly on the R&D front, you talked about a fairly meaningful sequential increase.
Is that tied to Romley or what is that extra expense tied to ?
at the 21:45 minute mark:
Gail Sasaki:
It is partially Romley. And it is also .. DDR3 NVvault.
Rich Kugele of Needham:
So some type of next-gen ..
Gail Sasaki:
Yes.
Rich Kugele of Needham:
Ok, but you would expect it to stabilize at that level ?
Gail Sasaki:
Yes.
Rich Kugele of Needham:
Ok. Alright, that’s it for me.
Gail Sasaki:
Thanks Rich.
at the 22:35 minute mark:
Keith Ellis of Midwestern Analytic:
Um .. Chuck maybe just want to shift gears a little bit if there are not any further technical questions.
A concern of our group has been your continued sale of stock, the performance of the company in general and I would also say, based on today’s call, it appears we are no closer to the “2 yard line”.
Talk a little bit about your motivation to sell stock, a little bit about what you receive in stock based compensation, and your plans for the future.
Thank you.
Chuck Hong:
I don’t know whether it is appropriate on the call to talk about my personal .. you know financial transactions. Whatever stock is being sold, it is off of a 10b5-1 plan that has been in place for a long long time, so you know .. that is no different from .. uh .. any other executive stock sales at a public company.
at the 23:40 minute mark:
As to you know the performance of the company. We’re .. as we’ve outlined in this call .. uh .. we believe that you know we are doing all the right things to .. continue to build on a foundation of this recovery as you saw from the top line growth, we are confident that the top line will continue to grow through the rest of this year and into next year very .. uh .. consistently.
Uh .. so that by the you know towards the end of the year we’re getting to the point where we are not losing money.
So, granted, it is not happening as quickly as we would like. But the fact of the matter is the recovery is strong and the business is being built back up quite nicely.
at the 24:50 minute mark:
So we continue to .. work with the major customers on the qualification of HyperCloud, and we believe that it will become a very important technology at Romley and at DDR4.
So hopefully that addresses your questions.
Chuck Hong:
Thank you for listening in and we look forward to continued interest in the quarters ahead. Thank you very much.
Accompanying summarizing thread on the NLST yahoo board:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=32522&mid=32522&tof=1&frt=2
NLST Q1 2011 earnings call transcript (not exact)
http://www.netlist.com/investors/investors.html
http://viavid.net/dce.aspx?sid=00008B06
Netlist – 2011 Second Quarter and Six-Month Results Conference Call
Aug 15, 2011 05:00 PM (ET)
Participants:
Chuck Hong – NLST CEO
Gail Sasaki – NLST CFO
Matt Lawson – Allen & Caron (Investor Relations firm for NLST)
Chuck Hong:
Hi Matt.
Thank you all for joining us – to discuss the 2011 second quarter (Q2 2011) and six months results.
As noted in our release, we are pleased with another quarter of strong revenue and gross profit growth.
In the second quarter, revenues grew by 72% over 2010 second quarter (Q2 2010) and 33% sequentially over (the) last quarter (Q1 2011).
at the 2:40 minute mark:
Margins remained at about 30% during the period – well above the 20% level of last year’s second quarter (Q2 2010).
Losses have been significantly reduced through increased gross profit dollar contribution, even as we continue to invest in R&D (research and development) and targeted marketing programs.
Our cash-based loss was close to breakeven for the current quarter.
We continue to target breakeven during the second half of the year, which will be accomplished by steady revenue growth across all product categories.
Now for some greater detail by product line.
at the 3:15 minute mark:
First the Vault family (NetVault, now called NVvault).
The revenue growth during the quarter was again anchored by continued strong demand for our Vault family of products.
Both the NVvault flash-backed battery-free (originally called NetVault-NV), and the original battery-backed version (originally called NetVault-BB).
at the 3:30 minute mark:
Vault sales in Q2 (Q2 2011) increased by 126% over the previous year’s Q2 (Q2 2010) and by 36% sequentially from Q1 of this year (Q1 2011).
In addition, since the introduction of ExpressVault, our PCIExpress backup and recovery solution, we are seeing adoption and system integration of our Vault technology at a growing base of new customers and expect to see production revenues from this new member of the Vault family during the second half of this year.
at the 4:00 minute mark:
Earlier this month, we introduced and began sampling our next-generation NVvault DDR3 product which combines the high performance of DDR3 RAM with our proprietary Vault cache-to-flash controller.
NVvault DDR3 extends market leadership (that) we have established with the current DDR2 generation.
By working directly with CPU manufacturers to facilitate a plug-and-play functionality in the next generation of servers, the NVvault DDR3 offers greater memory capacity and data restore capability in a standard DDR3 interface.
We have begun sampling this product to a broad base of customers and anticipate production revenues to begin in early 2012.
at the 4:50 minute mark:
The flash family.
In the second quarter (Q2 2011), flash sales more than doubled sequentially from the first quarter.
This growth is driven by our expanding embedded flash product portfolio and many design wins at multiple customers across medical, industrial and networking equipment segments.
We recently announced a new mini-PCIExpress SSD which features a smaller form factor, storage capacity of up to 128GB and up to five times the storage density of legacy solutions.
at the 5:30 minute mark:
These SSDs uniquely address space constraints challenging data center equipment.
As we noted in last quarter’s call (Q1 2011), our growing flash and Vault portfolios allow us to participate in the high-growth storage market, further diversifying our business beyond the cloud-computing space.
at the 5:50 minute mark:
HyperCloud.
During the quarter we intensified our HyperCloud qualification efforts to server OEMs and end-user customers.
During the quarter we reached a notable milestone by surpassing $1M in booked orders.
One of our largest end-user customers is a major internet retailer who is now using HyperCloud to upgrade their existing Westmere and Nehalem servers, in order to drive greater performance from their installed base of servers.
at the 6:25 minute mark:
Since the end of the first quarter (Q1 2011), we announced three new HyperCloud qualifications.
On last quarter’s call we discussed the qualification of Cirrascale in April.
Recently we announced NEC’s qualification and endorsement of HyperCloud.
NEC will make HyperCloud available with its LX-series of supercomputers, enabling various high performance computing applications in industry, academia and research.
Lastly, Ciara, Canada’s leading integrator of Intel-based servers, qualified HyperCloud with its Altas servers and Titan graphics processing unit systems.
Ciara customers are now able to run more advanced memory-intensive simulations within a given time frame and increase overall productivity.
In addition, HyperCloud memory modules were integrated, tested and validated with industry-leading NexentaStor open source software, reinforcing the product’s ability to support memory-intensive applications such as virtualization and storage.
at the 7:35 minute mark:
During the quarter we also made solid progress with other OEMs in the process of qualifying HyperCloud for the Romley platform – Intel’s next-generation of server CPUs.
As we stated in the past, unlike LRDIMM memory, HyperCloud does not create significant additional system latency, and does not require special software support on Romley in order for the memory to operate.
Still, our engineering team continues to work closely with the major server OEMs to run through the litany of component and end-system tests on the OEM’s Romley platforms in order to ensure the long-term reliability of the product.
We expect testing to be successfully completed whereupon HyperCloud technology would be formally adopted by the major OEMs.
at the 8:30 minute mark:
We expect HyperCloud products to be made available for sale by these OEMs concurrent with their launch of Romley-based servers at the end of this year or the beginning of next year.
at the 8:38 minute mark:
Specialty DIMMs.
This month we introduced two new specialty DIMMs to NLST’s growing portfolio of products.
Both products address the specific demands of high-performance computing, cloud computing, data analytics and virtualized data center environments.
The first product, HyperStream, is a low-latency server memory for high-speed applications, available in 4GB and 8GB DDR3 configurations.
We are excited to have DELL’s Software and Peripherals Group make HyperStream available for sale, and in recent weeks a major financial services firm has placed an initial order of HyperStream for deployment in its data center.
We expect HyperStream sales to grow over the coming quarters.
at the 9:30 minute mark:
Second, our 16GB quad-rank Very Low Profile RDIMM or VLP RDIMM delivers high density memory into space-constrained systems such as blade servers, storage bridge bay (SBB – for example Boston Limited’s Igloo 3U NXStor SBB appliance) and networking equipment.
NLST invented the VLP module in 2003 based around an IBM specification, and has continued to improve its design over the years.
Recently, we applied our patented Planar-X technology to the VLP design to achieve 16GB density using the lowest cost per bit 2Gbit DRAM (seems 2Gbit x 2 i.e. dual-die compared to competitors’ 4Gbit x 2 dual-die which is more expensive).
By comparison, competitors must use the much more expensive 4Gbit DRAM to achieve the same 16GB density on a VLP.
at the 10:21 minute mark:
With NLST quad-rank VLP, OEMs are able to meet the needs of data centers’ demand for main memory capacity, while drastically reducing the cost of that memory (because 2Gbit x 2 dual-die is cheaper per GB than 4Gbit x 2 dual-die memory package).
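(Transcriber’s note: the cost argument can be sketched numerically. The per-die prices below are hypothetical, chosen only to illustrate why the lowest cost-per-bit 2Gbit die can win even though twice as many dies are needed; actual 2011 DRAM prices were not given on the call.)

```python
# A 16GB module holds 16 x 8 = 128 Gbit of DRAM.
module_gbit = 16 * 8

# Hypothetical prices in cents (illustration only, not from the call):
# the 2Gbit die is assumed cheaper per bit, being the volume sweet spot.
price_2gbit_cents = 100   # $1.00 per 2Gbit die -> $0.50/Gbit
price_4gbit_cents = 260   # $2.60 per 4Gbit die -> $0.65/Gbit

cost_2gbit = (module_gbit // 2) * price_2gbit_cents   # 64 dies needed
cost_4gbit = (module_gbit // 4) * price_4gbit_cents   # 32 dies needed
print(cost_2gbit, cost_4gbit)   # module DRAM cost in cents, per option
```

Under these assumed prices the 2Gbit route costs less per module despite using twice as many dies, which is the point being made about Planar-X.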
We are currently in qualification with a major OEM and anticipate production revenue in Q4.
at the 10:40 minute mark:
As I mentioned earlier, we had a very positive quarter.
Not only with revenue growth and financial progress.
But with the addition of several new compelling products and continued progress with HyperCloud qualification efforts.
We will continue our investment in the next generation of our flagship product platforms – HyperCloud and the Vault family.
And take advantage of new opportunities in flash, low power SSDs and other specialty DIMMs.
All of these products provide high value memory solutions to our customer base in the dynamic storage and cloud computing market, and help to create a foundation for strong growth for the company.
Gail will now provide you with a more detailed financial update on the second quarter and six months results.
Gail ?
at the 11:30 minute mark:
Gail Sasaki:
Thanks Chuck and good afternoon everyone.
As you (saw ? – unintelligible) on our release this afternoon, revenues for the second quarter ended July 2, 2011 were $16M up 72% when compared to $9.3M for the second quarter ended July 3, 2010 (Q2 2010) and up 33% from the $12M in revenue for the first quarter ended April 2, 2011.
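(Transcriber’s note: the quoted growth rates check out against the revenue figures, using the rounded numbers as stated on the call.)

```python
# Revenue in $M, rounded as quoted on the call.
q2_2011, q2_2010, q1_2011 = 16.0, 9.3, 12.0

yoy = q2_2011 / q2_2010 - 1   # year over year
qoq = q2_2011 / q1_2011 - 1   # sequential
print(f"{yoy:.0%} YoY, {qoq:.0%} sequential")   # 72% YoY, 33% sequential
```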
Gross profit for the second quarter ended July 2, 2011 was $4.9M or 31% of revenue, compared to a gross profit of $1.8M or 20% of revenues for the second quarter ended July 3, 2010.
An increase of gross profit dollars of 172%.
This improvement was due to the 72% increase in revenue and favorable DRAM cost environment as well as the increased absorption of manufacturing costs, as we (unintelligible) 101% more units than the year earlier quarter, with a 16% increase in the cost of factory labor and overhead.
We continue to plan on a range between 25% to 30% for a gross profit percentage during the second half of this year.
This will be dependent on the quarter’s and second half of the year’s product mix, DRAM cost and continued growth in unit production in each quarter.
at the 12:50 minute mark:
Net loss in the second quarter ended July 2, 2011 was $1.5M or $0.06 per share, compared to a net loss in the prior year period of $4M or $0.16 per share.
These results include stock based compensation in the second quarter of $406,000 compared with $426,000 in the prior year period.
And depreciation and amortization expense of $602,000 in the most recent quarter, compared with $552,000 in the year earlier period.
at the 13:30 minute mark:
Our cash-based loss after adding back these non-cash items was reduced to $503,000 which is an improvement of 83% over last year’s quarter.
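(Transcriber’s note: those figures reconcile if “cash-based loss” simply means net loss with the two non-cash items added back; the small gap versus the cited $503k comes from the net loss being rounded to $1.5M.)

```python
# Q2 2011 cash-based loss: net loss with non-cash items added back.
net_loss_q2   = 1_500_000   # rounded on the call
stock_comp_q2 =   406_000   # non-cash stock-based compensation
dep_amort_q2  =   602_000   # non-cash depreciation and amortization
cash_loss_q2 = net_loss_q2 - stock_comp_q2 - dep_amort_q2
print(cash_loss_q2)   # ~492,000, close to the $503k cited

# Year-earlier quarter, for the "improvement of 83%" claim:
cash_loss_q2_2010 = 4_000_000 - 426_000 - 552_000
improvement = 1 - 503_000 / cash_loss_q2_2010
print(f"{improvement:.0%}")   # 83%
```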
Revenues for the six months ended July 2, 2011 were $28M up 53% from revenues of $17.2M for the prior year period.
Gross profit for the six months ended July 2, 2011 was $8.7M or 31% of revenues, compared to a gross profit of $3.6M or 21% of revenue for the six months ended July 3, 2010.
Net loss for the six months ended July 2, 2011 was $4.3M or a $0.17 loss per share compared to a net loss in the prior period of $6.9M or $0.31 loss per share.
These results include stock based compensation expense in the six months of $759,000 compared with $808,000 in the prior year period, and depreciation and amortization expense of $1,083,000 compared with $1,130,000 in the year earlier period.
Total operating expenses decreased 4% to $6.3M from $6.6M in the previous consecutive quarter.
The increase from $5.8M in the year earlier quarter was primarily due to an 18% increase in research and development (R&D) expense, from increased engineering headcount and material expenses related to product build (?), primarily related to HyperCloud and NVvault development.
at the 15:00 minute mark:
Sales and marketing expense increased by 8% from the previous year due to increased sampling and qualification activities, and investment of new headcount necessary to execute our vertical marketing strategy of engaging with end-user customers directly.
Administrative expense decreased 15% from the year earlier quarter.
Overall we do expect that total operating expenses may increase by 10-15% during the second half of the year, mainly driven by anticipated increases in next-generation HyperCloud and Vault engineering headcount and program (?).
at the 15:39 minute mark:
On the IP front, we continue to vigorously defend our patent rights in the U.S. Patent Office.
As noted in earlier calls, these processes will run their course and we remain comfortable in our position and confident in the validity and enforceability of our patents.
at the 15:57 minute mark:
During the quarter the court dismissed the separate cases brought against us by Inphi and Ring Technologies.
We did not record a benefit for income taxes for the second quarter ended July 2, 2011 as operating loss carry forward was fully reserved.
On a go-forward basis we anticipate a rate of 0% until we begin to utilize our fully reserved net deferred tax asset.
We ended the second quarter with cash, cash equivalents and investments and marketable securities totalling $12.1M, compared to $12.3M as of April 2, 2011.
At the end of the quarter we had unutilized availability of $4.4M on our credit line.
at the 16:43 minute mark:
During the second quarter capital expenditures totalled $134,000 compared to $184,000 in the previous year’s quarter.
We anticipate investment in equipment to support increased capacity in our new products over the next several months of approximately $500,000.
at the 17:00 minute mark:
As mentioned in previous calls, we continue to target financial breakeven by the end of the year.
However, we may still be a net user of cash during the second half, as our accounts receivable and inventory continue to expand to support the increased revenue.
at the 17:17 minute mark:
We increased our investment in inventory during the second quarter in order to prepare for a backlog of orders shipping in the third quarter.
However our target during the second half is to reduce inventory to 45 days on hand.
at the 17:35 minute mark:
As you know from past calls, we have sufficient capacity on our current $15M line of credit for working capital needs.
In addition, during the second quarter we received an infusion of cash due to a $3M term loan from our bank (unintelligible – to support ?) general growth needs.
Thank you for listening in today.
Operator we are now ready for questions.
Question & Answer session ..
at the 18:15 minute mark:
Arnab Chanda – Ross Capital Partners:
Hi, can you hear me ?
Gail Sasaki:
Yes, Arnab.
Arnab Chanda – Ross Capital Partners:
Thanks. Thanks Gail.
The question .. I have really a couple of questions.
One is .. you know if you look at the HyperCloud .. you know design wins or customer activity, it seems like it’s taken you know .. longer than you would anticipate when you first talked about the product.
Can you talk a little bit about what the gating items are .. because it’s proprietary, or is it because you know there are other standards out there ?
Can you describe sort of what your strategy is and what the alternatives are in front of customers ?
And sort of what time frame do we think we can see you know adoption – do you need to have a second generation product. Thank you.
Chuck Hong:
Hi Arnab.
On the issue of what remains at the customers in order to get qualification – as I mentioned .. a lot of product-level and in-system testing for long-term reliability.
The product has taken longer to qualify.
I think in terms of the timeline from here on out, I think we are on track now to get the testing completed and qualification finished to get on the customer’s approved vendor list and to have the technology and product get adopted by the .. by the major OEMs in time for the release of the Romley platform.
at the 20:15 minute mark:
In terms of what the competitive product is .. probably the main product out there is the LRDIMM (Inphi, IDTI make buffer chips for LRDIMM) – Load-Reduced DIMM.
Which is .. a product that I believe is in the process of becoming qualified, but as we mentioned from the get-go, that product .. has, we believe, performance issues, such as high latency.
Along with a special facilitation in the BIOS that has to be done in order for the product to operate properly on Romley.
These are areas where .. we believe our product has shown superior performance ..
at the 21:15 minute mark:
Angenie (?) – Needham & Co:
Hello.
Thank you for taking my question.
I am calling on behalf of Rich Kugele.
I just had a question on gross margins first off .. you had some commentary on the second half of this year and I missed that and I wonder if you could reiterate that ?
Gail Sasaki:
Sure.
We are .. I basically stated that we are expecting a range of 25-30% for the second half of the year.
Anthony (?) – Needham & Co:
Ok, understood.
And as far as the operating expenses go, I noticed that they are quite low on a percentage-of-revenue basis .. just wondering if you think the gross margin outperformance and the strong performance on operating expense are going to be sustainable, or (if you can) give any color on what to expect for the second half of the year.
at the 22:11 minute mark:
Gail Sasaki:
I did mention earlier that we do believe that operating expenses could go up by 10-15% during the second half of the year.
And I reiterate my position on the gross margins.
Anthony (?) – Needham & Co:
Understood.
That finishes me up for today.
Gail Sasaki:
Ok. Thank you.
at the 23:07 minute mark:
Thank you for your involvement with NLST and we look forward to our call in the next quarter.
Thank you.
Accompanying summarizing thread on the NLST yahoo board:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=34004&mid=34004&tof=1&frt=2
NLST Q2 2011 earnings call transcript (not exact) 15-Aug-11 04:16 pm
Summary of PR:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=34004&mid=34017&tof=1&rt=2&frt=2&off=1
Re: NLST Q2 2011 earnings call .. summary of PR 15-Aug-11 04:36 pm
http://www.netlist.com/investors/investors.html
Craig-Hallum 2nd Annual Alpha Select Conference
Thursday, October 6th at 10:40 am ET
http://wsw.com/webcast/ch/nlst/
Participants:
Gail Sasaki – NLST CFO
Chris Lopes – NLST VP Sales and Co-Founder
Gail Sasaki:
Alright. Just going to get started.
I’m Gail Sasaki and I am the CFO of Netlist, and I want to introduce our speaker this morning – Chris Lopes – who is an 11-year veteran of Netlist – been here from the beginning – helped to shape the company in many ways.
As noted, he’s an engineer .. and good business person .. (unintelligible) with the company for 7 years .. so .. thanks Chris.
at the 00:30 minute mark:
Chris Lopes:
Alright. We’ll condense about 40 minutes of material into .. 20 (minutes).
That sounds like a good deal ? That’s a bargain right ? Everyone likes a bargain.
Our forward looking statements – you guys have all seen this – you’re all speed readers so that ..
So who are we ?
11 year old company. We’re a pure play in cloud computing – if you want to think of us that way – we have created $750M of sales in the last 11 years. We went public almost 5 years ago, in November 2006.
We are a global company. We do our design work in Irvine, CA and San Jose (CA). That is we have a design center there – we have sales offices around the country and in Europe and Asia and we have a large factory in Suzhou, China where we build our sub-systems.
So we are a sub-systems company. What does that mean ?
Means we build a piece of a big jigsaw puzzle that goes into a big system – typically a server or storage appliance.
We deal with tier-1 customers. HP, IBM, DELL, EMC (?), Cisco, NetApp, FFIV – these are marquee customers – it’s a fairly consolidated market for us. And it takes a long time to get involved with each of these customers.
You can imagine the qualification requirements and investment on their end of resources.
There are substantial barriers to getting involved with any of these companies and we’ve succeeded.
at the 01:45 minute mark:
Now, we have a couple of products that we will highlight today – really some game changing products – one for server, one for storage area.
The first is called HyperCloud – that’s a DRAM based product.
And the second is our NVvault, which is a combination of flash and DRAM.
And we’ve got about 60 plus patents going.
So if you look at cloud computing, you are seeing a lot of news on this – obviously iCloud (AAPL) is going to become much more prevalent, Netflix is now working out of the cloud, and of course enterprises are now trying to figure out how not to spend a ton of money themselves and how to plug in and pay for a service.
at the 02:30 minute mark:
We’ll focus on a couple of these areas – these are all driving high density architectures in the server space.
And cloud server units are growing at about 20% a year (for the) next couple of years.
at the 02:40 minute mark:
So if you look at the market that WE play in – really 2 areas – the storage side which has a lot to do with flash-based and RAID control memory .. sorry NVvault. HyperCloud plays a little bit there and some battery-backed – we’ll talk a little bit about that towards the end of this presentation and we’ll focus the first part now on the larger market which is our HyperCloud and that’s a $4.3B market and growing.
So we’ve got a pretty large market to play in.
And if you look at performance in a server. There is a lot of talk now about tiered storage – lot of activity has gone on into the SSD space and PCI-SSD and you can see the access times.
at the 03:25 minute mark:
So think about it this way – a standard hard drive operates in access times of milliseconds. You can get about a thousand times faster by going to an SSD. And you get another thousand times faster again by keeping your memory in DRAM. And so we try to do a lot of work for our customers into the DRAM space to get the maximum performance.
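(Editor’s sketch: the tier speedups above can be put in rough numbers. The latency figures below are ballpark illustrations, not vendor specs or figures from the talk.)

```python
# Back-of-envelope storage-tier access latencies (ballpark, illustrative only).
latency_s = {
    "hdd": 5e-3,    # hard drives: milliseconds per random access
    "ssd": 5e-6,    # SSDs: roughly a thousand times faster
    "dram": 5e-9,   # DRAM: roughly a thousand times faster again
}

accesses = 1_000_000  # e.g. random lookups over a working set

for tier, lat in latency_s.items():
    total = accesses * lat
    speedup = latency_s["hdd"] / lat
    print(f"{tier}: {total:,.3f} s total, {speedup:,.0f}x vs HDD")
```

The point of the calculation: a workload dominated by random access finishes a million times sooner if its working set lives entirely in DRAM rather than on spinning disk.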
at the 03:35 minute mark:
Talking about a couple of quick examples – MSC Software makes a product called NASTRAN. So it is finite-element modeling, computational fluid dynamics (CFD).
So if you are building something big that moves, that has airflow or waterflow, you need to analyze how it’s going to operate – so in this example, MSC ran some very very large models and discovered that when they load the servers they use with our memory, they run 21% faster.
So what does 21% mean ? That means it – does it mean it is worth a 21% premium ?
Well no, it is worth a lot more than that in this case. In fact you’ll have an engineer – let’s say an aeronautical engineer running a model – typically took a day to run.
So he loads his information, he runs it, he comes back tomorrow – he gets some results and he makes some decisions about design changes.
at the 4:35 minute mark:
When they run it with our memory, they run that same model – they get an answer the same day, they decide to modify and try another case and start that before they leave for the day. So when they come back to the office they’ve effectively doubled their workflow. They now have two models to analyze in the same time.
at the 4:45 minute mark:
This allows them to do things that I consider pretty important since I fly a lot.
So for example, would you rather get on an airplane that had been fully simulated, or one where one guy simulated the first half and another guy simulated the back half?
Right, so .. the ability to load very complex models into one system, into large DRAM, and run them makes a better product.
So that is a high performance computing application.
at the 05:10 minute mark:
One that might be a little closer to home for this audience – in the financial services industry. Imagine loading all your trade data – your tick database – right into RAM, so you can analyze it real-time and make decisions for trading.
And that’s what large amounts of RAM do, especially high-speed RAM.
So we are doing this in .. risk analytics. Example: what happens if there is another earthquake in Japan – how does it affect a particular stock .. how would I model that, how would I make fast decisions for doing that.
So we have companies on Wall Street who are looking at using LARGE amounts of memory to enable this kind of activity.
at the 05:55 minute mark:
There is a fairly large demand – some call it “insatiable” – in the end-customer space. And there are a couple of key technologies that drive this.
One you are very familiar with is multi-core technology that companies like INTC and AMD are producing.
So as you go from Nehalem to Westmere to Romley and we go from 2 and 4 to 8 cores per CPU and on the AMD side – Magny-Cours and Interlagos now coming out with 16 cores.
Every additional core benefits from more memory per core. Which means the system itself needs more memory to hold that.
So that is a big big driver.
at the 06:30 minute mark:
You have seen virtualization with the things that VMware is doing. It is putting 15 users on a machine and going to 20 and 25.
And if you go to a lot of stock trading desks you can see the screens and they are all virtual desktops managed by a server in the background.
at the 06:45 minute mark:
And then cloud computing requirements today – people trying to do more in the cloud as you get more comfortable with it, they are running larger jobs.
You can see things like airplane simulation being done in the cloud .. instead of your own machine at some point in the future.
And companies like AMZN are working on empowering and developing the hardware and making that available.
That is a big market for them.
So the elasticity in being able to move and repartition some memory per user in the cloud is very important. Having a large DRAM space gives that flexibility.
at the 07:15 minute mark:
On the supply side there are some very big holes that need filling.
One is silicon itself in the DRAM has a very difficult time migrating to next-generation technologies.
at the 07:30 minute mark:
The physics of DRAM prevent fast scaling.
It looks like 8Gbit DRAM may be the last monolithic die today.
Today 4Gbit (not gigabytes) just hit the market. And an estimated $25B investment is needed in the DRAM industry to get to the final lithographies required to build 8Gbit (DRAM) cost-effectively.
It is really .. Samsung’s probably the only player with the pockets to do that, ’cause they’re making money on Galaxy Tabs (Android tablet computer) and everything else today.
at the 08:00 minute mark:
So the industry says we still need a solution. INTC’s got a problem, HP’s got a problem, IBM, AMD – all these big guys rely on large amounts of memory being available so that their servers can get to market and do what they’re supposed to do.
at the 08:05 minute mark:
But a couple of the alternate technologies today – one is our HyperCloud product.
HyperCloud is a load-reduced and rank-multiplied technology.
We’ll go into that in just a second.
So there are several different pushes now in the industry – using that technology – and 3DS, a 3-dimensional stacking approach that takes 4Gbit dies, stacks them 4-high and uses “through-silicon vias” to connect them.
That’s a great technology – hasn’t been perfected yet, but it still has loading and speed issues related to what we feel our technology can help overcome.
And you are seeing SSDs increasingly being used to offload some of the memory and reduce some of that bottleneck for hard drives.
at the 08:45 minute mark:
Let’s look at the HyperCloud product.
This is a 5.5 inch memory stick – if you have ever upgraded your memory yourself in a desktop computer, it looks very similar – same size, same socket it fits into.
Now in a server there are 24 of these sockets that can be filled.
So one server could hold, you know, $18-20,000 of HyperCloud memory.
Right ? So it is a .. we’ll just take the cover off of it – that was a heat-spreader there.
at the 09:25 minute mark:
We make 2 custom pieces of silicon.
And we spend a particularly large amount of R&D (research and development) dollars designing these chips.
The first is a register device that multiplies the ranks available.
So the system thinks it has 2 ranks to talk to memory.
We can actually make a 4 rank memory look like 2 ranks – effectively doubling the amount of DRAM on any one DIMM.
That gives us a cost advantage in some cases and certainly a performance advantage in most (cases).
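(Editor’s sketch: rank multiplication can be illustrated conceptually as a decode of a borrowed address bit. The function below is an invented illustration of the idea, not Netlist’s actual register logic, which is proprietary.)

```python
# Conceptual sketch of rank multiplication (illustrative; not the real silicon).
# The host drives 2 chip-select (rank) signals; the on-DIMM register combines
# them with one extra address bit to select among 4 physical ranks.

def decode_rank(host_cs: int, extra_addr_bit: int) -> int:
    """Map (2 host-visible ranks) x (1 borrowed address bit) -> 4 physical ranks."""
    assert host_cs in (0, 1) and extra_addr_bit in (0, 1)
    return (host_cs << 1) | extra_addr_bit

# The host sees only 2 ranks, but the DIMM actually addresses 4 -
# effectively doubling the DRAM behind each host-visible rank.
physical_ranks = {decode_rank(cs, b) for cs in (0, 1) for b in (0, 1)}
print(sorted(physical_ranks))  # [0, 1, 2, 3]
```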
at the 09:35 minute mark:
But without the isolation devices – there are 9 of those along the edge – that memory would slow the whole bus down .. to an unacceptable speed.
So we need to compensate for the capacitive loading of all the additional chips by buffering it and isolating it from the system, which allows us to run these very large memory .. very fast.
And that gives us the maximum speed of 1333MHz and .. think about this 3/4 of a Terabyte .. 768GB (gigabyte) in one server.
So you can do a lot of work with that kind of .. data in RAM and not having to do disk access to go grab some models.
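(Editor’s sketch: the capacity and cost figures above check out arithmetically – 24 sockets of 32GB HyperCloud modules, at the quoted $18-20,000 per server.)

```python
# Capacity math from the talk: 24 DIMM sockets per server, 32GB modules.
sockets = 24
dimm_gb = 32
total_gb = sockets * dimm_gb
print(total_gb)          # 768 GB
print(total_gb / 1024)   # 0.75, i.e. 3/4 of a terabyte

# Implied per-DIMM price at the quoted $18-20K per fully loaded server
# (illustrative back-calculation, not a published price list):
price_low, price_high = 18_000 / sockets, 20_000 / sockets
print(round(price_low), round(price_high))  # 750 833
```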
at the 10:25 minute mark:
If you are in oil and gas, (an) analysis company for example, you can load the full oil well into RAM and now analyze it.
You know, I am told they can spend about a $100,000 a minute in analysis of whether they should keep drilling or not.
So do you want to be the guy that can tell them in 20 minutes or in 2 minutes whether or not there is more oil to go there.
So having large amounts of RAM really impacts what you can do.
at the 10:50 minute mark:
(We are) making this available in 16GB (gigabyte) and 32GB DIMM densities, which is the largest in the industry today.
at the 11:00 minute mark:
Our customers – you can see a couple of them here – HP .. increased server bandwidth capacity to enhance performance. SuperMicro .. unprecedented levels of performance. Viglen .. improved simulation times .. all about performance. No one wants to spend more money unless they get something for it. They get a lot for this. So we are seeing good play.
at the 11:20 minute mark:
Now, industry’s moving forward .. it always does .. DDR interface today is DDR3, we will go to DDR4 in about 2.5 years.
Industry committee JEDEC (Joint Electron Device Engineering Council), of which we are part, is already working on the interface standards for processors to talk to memory going forward.
That’s called DDR4.
There are several changes – lower voltages, higher speeds. And with speeds come loading problems and buffer problems .. buffer solutions are needed.
at the 11:50 minute mark:
On the top you can see what the industry is now pushing for DDR4 – it’s called the “distributed buffer architecture”.
And below that you have what we have today, which happens to be called “a distributed architecture”.
And so the HyperCloud distributed architecture is already a generation ahead of the rest of the industry.
There are lots of patents covering this.
There is a lot of interface between the register and the buffer chips.
Took us a long time to work out – many years of fine tuning to get that done.
So we feel we are very well positioned to carry this technology through DDR3 for the next couple of years and onto DDR4, where the market is REALLY projected to grow significantly in volume.
at the 12:30 minute mark:
So we have been doing this since 2004 – we started work with AAPL on a rank-multiplied solution to solve a problem in their Xserve.
And we came across a lot of need for innovation doing that, and filed some patents along the way – and that was back at DDR1.
We did it for DDR2. We are doing it for DDR3. We’ll do it for DDR4.
So across multiple channels, our multiple technologies .. we were able to solve these problems.
Problems get more difficult .. the speed goes up .. the voltage levels go down .. you really need to know what you are doing in this space.
at the 13:00 minute mark:
We have 17 granted patents in this area alone.
Another 30 in flight (?).
So this is an area we guard very well – a lot of know-how as well as patents related to this.
at the 13:15 minute mark:
Let’s shift over now to the storage side.
You’ve seen a lot of info in the market on SSDs – there’s over a hundred SSD manufacturers today.
We make solid-state products that do several different functions.
First one is – backup in RAID systems.
So we started doing this work years and years ago when we had batteries backing up the RAID.
And so this little card here is a cache memory for a RAID system – that’s a DDR2. That’s a 512MB or 1GB version.
at the 13:45 minute mark:
And we discovered that our customers don’t like batteries.
In fact, batteries wear out. So how do we get rid of the batteries?
We figured (out) a way to do that – mirroring flash and DRAM together with proprietary software or firmware to control that and a “supercapacitor” that holds it up to make the transition.
at the 14:00 minute mark:
So imagine you are working (on) your system and the power goes out in your building.
You are plugged into the wall. You just lost whatever you were working on, right ?
Not if you have a product like this in your system.
at the 14:10 minute mark:
Because it caches it and upon power-down, it takes whatever is in your RAM and moves it over into flash.
Once it’s in flash, it doesn’t matter how much .. when you get power – it could be 10 years.
But you’ll have the data.
And we have enough power in that little pack (the “supercapacitor”) to transfer it over – about a minute is what it takes. So the transfer’s done.
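(Editor’s sketch: the power-fail behavior described above can be modeled as a simple two-step flow. This is a behavioral illustration only – the function names are invented, and the real NVvault firmware is proprietary.)

```python
# Illustrative behavioral model of a DRAM-to-flash power-fail backup
# (invented names; the actual firmware and hardware are proprietary).

def on_power_fail(dram: bytes, flash: dict) -> None:
    # The supercapacitor holds the module up just long enough (~1 minute,
    # per the talk) to mirror the DRAM contents into non-volatile flash.
    flash["backup"] = bytes(dram)

def on_power_restore(flash: dict) -> bytes:
    # Once in flash, the data survives without power indefinitely.
    return flash["backup"]

flash_store = {}
cache = b"dirty RAID write-back cache contents"
on_power_fail(cache, flash_store)
print(on_power_restore(flash_store) == cache)  # True
```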
at the 14:30 minute mark:
Now it is not that important if you are working on a PowerPoint or a spreadsheet, but if you are caching important data to a hard drive as a server, it’s EXTREMELY important that you have that protected.
So this is a very big seller for us.
And our customer said “well I don’t have a RAID system, but I certainly .. sure want that kind of application – so what can you do to make that available ?”.
We did that with a product called ExpressVault – we built a complete card where we make an interface to the PCI Express (slot) – plugs right in to a standard system now – our card goes right on there, so it’s really an adapter card. That lets everybody use this function now – if they want.
at the 15:10 minute mark:
And that’s at DDR2 and customers said “well that’s great, but I want to go to DDR3”, so we made a DDR3 module.
In this case we analyzed and figured out, if we can work directly on the memory bus, instead of through the PCI Express bus, we can get a tremendous throughput advantage.
at the 15:20 minute mark:
And so our customers said “yeah, that’s great, you better work with CPU manufacturers now”, so we’re doing that.
The CPU guys and us are working together to enable this product to plug right into a memory socket and give you that instant backup capability.
And that’s a combination of DRAM and flash (memory).
You need the DRAM for the speed and you need the flash for the non-volatility, but you gotta have a way to move one to the other very quickly.
And that’s proprietary and we do that very well.
at the 15:45 minute mark:
The company has shown 10 consecutive quarters of gross profit growth.
Chart may not show it very well – the blue is revenue, and our margins are right now a little above 30% on continuing growth of revenue.
So we’ve got a nice product mix that has a high margin.
And nice track record for last 10 quarters.
at the 16:05 minute mark:
Our steady-state model says you take a 30% gross profit business, you spend about 15% of that in OpEx (operational expenditure) and you’ve got 15% for the bottom line.
So we are moving towards that .. very soon .. we are moving towards breakeven here (some point on chart ?) this year.
And we’re excited about where that goes next year as that whole HyperCloud 32GB (memory module) really takes off.
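(Editor’s sketch: the steady-state model as stated works out directly – 30% gross margin minus ~15% of revenue in OpEx leaves ~15% for the bottom line. The revenue figure below is arbitrary, used only to show the arithmetic.)

```python
# Steady-state model from the talk: 30% gross profit, ~15% OpEx, ~15% bottom line.
revenue = 100.0                     # arbitrary units, illustrative
gross_profit = revenue * 0.30
opex = revenue * 0.15               # OpEx as a share of revenue
operating_income = gross_profit - opex
print(operating_income)             # 15.0
print(operating_income / revenue)   # 0.15 -> 15% of revenue
```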
at the 16:30 minute mark:
Takeaways for you today:
Customers – we deal with top-tier customers .. these are marquee names that are moving into cloud computing in a big way, or already leaders in storage or cloud computing servers.
The trends in the server space – requiring more memory with multi-cores.
Increased use of very sophisticated software, analytics, trade .. trading data as we talked about.
Along with the .. not hesitancy, but the .. inability of the standard DRAM industry to meet those needs with large amounts of silicon, that creates quite an opportunity for us.
We have strong IP position along high-density and load-reduction – so a lot of competitive barriers there.
We’ve got some very interesting products related to flash and DRAM together – either boot-up, instant-save, constant-save, RAID-caching, as well as the HyperCloud high-density high-speed high-frequency, with low-latency, and we’ve got a team that’s been together for a (unintelligible) amount of time.
Founders are still very active in the company – 11 years now.
Most of our executive team’s been together for over 5 years, 7 and 8 years. So there is a pretty well established proven track record and there is significant management ownership in the company, still. So there is a lot of care.
Open for questions.
Yes.
Question and Answer session:
at the 17:50 minute mark:
Question:
(unintelligible)
Chris Lopes:
Well, the question is .. can we talk about design wins on the Romley platform.
I can’t yet tell you who we’re qualified on.
Romley has not been released yet. Won’t be till like .. looks like Q1 (2012).
But I could tell you that we’re working with very large companies who build products and they are working on Romley qualifications.
Our product performs very well with Romley.
We have several companies – early platform, in our labs, validating that, as well as our own product at the labs of our customers where they are doing their own qualifications and validations today.
Question
at the 18:30 minute mark:
(unintelligible)
Chris Lopes:
Yes. Every new process or family requires a re-qual (re-qualification) as well as every new density.
So at Westmere, the highest density was 16GB.
You get Romley, we are really talking 16GB and 32GB.
32GB (memory modules) are just being released.
So yeah .. our customers will have those to finish qualifications for Romley.
Question:
at the 19:00 minute mark:
(unintelligible)
Chris Lopes:
Right.
Question:
(unintelligible)
Chris Lopes:
So the question is .. LRDIMM adoption .. the web 2.0 companies.
LRDIMM is designed to work around (?) Romley.
Requires a special BIOS – which evidently is not yet completed .. according to my customer sources.
There is a special BIOS on Westmere that was kind of experimental to try to get early adoption – I don’t know anyone that’s shipping that.
LRDIMM is really a next-generation product as well.
I don’t believe that .. and I don’t have complete visibility into everything those companies are doing.
But it doesn’t seem it would make sense for them to use Westmere for that.
Any other questions.
Ok, I thank you for your attention today ..
Yes .. (another questioner emerges)
Question:
at the 20:05 minute mark
(unintelligible)
Chris Lopes:
We’ve already modelled in a Q1 (2012) launch of Romley .. in our financials.
So .. if it pushes beyond Q1 (2012) it will have, you know, impact to our growth, but our existing business (is) very steady .. steady-state .. not related to Westmere or Romley launches. It’s really where we grow .. in some of the new products.
Especially the 32GB (HyperCloud memory modules).
Yes, sir ..
Question:
at the 20:45 minute mark:
(unintelligible)
Chris Lopes:
The question is .. are we as (unintelligible) on the storage side as we are on the server side with DRAM.
Uh, the answer’s yes.
Very limited competitive positioning from anyone else in this.
Because it’s a mixed technology on the storage side .. with DRAM and flash.
So just a few companies are working this space – mostly module sub-system manufacturers.
And since we have such a good reach with large OEMs – we’ve been through 4 and 5 year engagements to get through the quality and support requirements needed to do business with them – we have a big advantage, because we are IN the customer if that customer needs that product.
The other companies that are trying to address that space really have never done business with many of these OEMs.
Question:
at the 21:40 minute mark:
(unintelligible)
Chris Lopes:
We do, we make an mSATA product and a PCIe (PCI Express) product right now up to 128GB.
These are embedded solid-state drives – they are more for industrial or for things like server boot-up.
Since we are already working with the large server guys, this is already a pretty good reach for us – whereas the competition there are people you have never heard of.
We are not in the commodity consumer space for SSD – that’s where, as I mentioned, there are 100 companies doing that.
There are some interesting companies out there – technologies that I think you need to .. you probably need your own controller to do that well.
And to have a differentiated space.
We are partnering with some controller companies today.
And really finding some niches there .. as opposed to going after mainstream.
at the 22:40 minute mark:
So there is .. in the flash area you can look at .. we can make a lot of standard commodity SSDs (?) in (unintelligible) ..
We make the NVvault product as a battery-backed replacement. We make that product available in the standard memory (form factor) and also do some of the embedded stuff for the mSATA interface as well as PCI.
Yes, sir ..
Question:
at the 23:05 minute mark:
(unintelligible)
Chris Lopes:
Well, we started (unintelligible) as a public .. public lawsuit that we have with GOOG, around violating our IP.
So that is still pending and it’s been through many revisions and lots of lawyers and judges are involved in that.
Other than that I don’t have a concern .. but I don’t have complete knowledge of what they are doing there.
Question:
at the 23:35 minute mark:
(unintelligible)
Chris Lopes:
Inphi (IPHI). Good question. How is HyperCloud different from what IPHI is offering.
IPHI is a chip company – so they build a register.
The register is then sold to a memory company.
And the memory company builds a sub-system with that.
And that’s the module they are calling an LRDIMM or Load-Reduced DIMM.
The difference is that the chip is one very large chip, whereas we have a distributed buffer architecture, so we have 9 buffers and one register.
Our register fits in the same normal footprint of a standard register, so no architectural changes are needed there.
at the 24:35 minute mark:
And our distributed buffers allow for a 4 clock latency improvement over the LRDIMM.
So the LRDIMM doubles the memory. HyperCloud doubles the memory.
LRDIMM slows down .. the bus. HyperCloud speeds up the bus.
So you get ours plugged in without any special BIOS requirement.
So it plugs into a Westmere, plugs into a Romley, operates just like a register DIMM which is a standard memory interface that everyone of the server OEMs is using.
The LRDIMM requires a special BIOS, special software firmware from the processor company to interface to it.
And it’s slower.
Does that answer your question ?
Question:
at the 25:20 minute mark:
(unintelligible)
Chris Lopes:
Yes.
You could look at it from an investment standpoint of let’s say there is 20M units of opportunity next year for HyperCloud or Load-Reduction DIMM (LRDIMM).
Inphi is selling a chip into each one of those DIMMs for I don’t know $5-10 something like that.
We are selling a module at $100-200 up to $1,000 depending on the density.
So we (unintelligible) that’s why the sub-system space is very (laughs) exciting.
We leverage the full bill of materials as well as we have to handle all of the interface issues that come up.
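(Editor’s sketch: the chip-versus-module economics can be laid out using the speaker’s own numbers – ~20M units of opportunity, a ~$5-10 register chip, and a ~$100-$1,000 module. The totals below are the ceiling of the opportunity, not a revenue forecast; no vendor captures all units.)

```python
# Revenue-opportunity comparison using the numbers quoted in the talk.
units = 20_000_000                         # ~20M DIMM units of opportunity

chip_rev = (units * 5, units * 10)         # register chip at $5-10 each
module_rev = (units * 100, units * 1_000)  # full module at $100-$1,000 each

print(chip_rev)    # (100000000, 200000000)   -> $100M - $200M
print(module_rev)  # (2000000000, 20000000000) -> $2B - $20B
```

This is the leverage of selling the sub-system: the module captures the full bill of materials, roughly 20-100x the dollar content of the chip alone.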
If you think about it – I’ve used this analogy before .. most system manufacturers want to put together a puzzle with 5 big pieces of the jigsaw, not 100.
They don’t have time.
To be one chip and then to rely on someone else to then put it together into a bigger piece and then rely on them to sell it and interface it is a long reach.
We figure let’s build the bigger piece and make sure it fits right into our customer.
Yes, sir ..
Question:
at the 26:30 minute mark:
(unintelligible)
Chris Lopes:
Sure, from a competitive standpoint for HyperCloud, there’s really only two ways that we know today to get to the higher density.
One is you stack DRAM and you slow the bus down to talk to that. As long as you can overcome the rank limitation.
So .. so IPHI and I think there are one or two other companies (IDTI ?) trying to build the interface chips to do the load-reduction.
But I think IPHI is the only one out in the market today .. is the primary guy out there.
In terms of just making larger RDIMMs (registered DIMMs), standard RDIMMs, you look at the silicon companies themselves like Samsung, Micron and Hynix and when they will have 8Gbit technology available to build a standard RDIMM to then do what our product does with the 4Gbit technology.
And some analysts are telling us that’s 2.5 to never in years (laughs) to when that happens.
And they’ve got some challenges in doing that – besides the lithography of getting to 10nm, there is an interface change from DDR3 to DDR4.
So how much money do you put into a DDR3 version of an 8Gbit (DRAM) if that market is going to shift to a new interface, new speed and new voltages, RIGHT when your chip will be available?
at the 28:05 minute mark:
So that would be kinda Samsung’s problem. Everybody else has just introduced 4Gbit and they are on a 2.5 to 3 year cycle for density.
Even if they could, if they could overcome the technology challenges, TIME to get to 8Gbit is about a 2.5 year window.
So we think we are very well positioned there.
I think in the 16GB (16 gigabyte memory modules) we did not have this advantage.
Because when you have plenty of 4Gbit chips (DRAM), they can get down in price enough to obviate the need for 2Gbit rank-doubled (modules).
So that cross-over is starting to happen already.
We don’t see that cross-over happening again – at least for 2.5 years .. if ever (meaning newer higher density chips won’t become too cheap – in fact won’t even be available for 2.5 years).
It IS a more exciting story today than it was when we introduced the product several years ago because of that.
Yes, sir ..
Question:
at the 28:55 minute mark:
(unintelligible)
Chris Lopes:
The question is will people accept slower speeds (i.e. mean LRDIMMs) for some other reasons.
Sure. Applications that are NOT speed sensitive.
So let’s say I need large amounts of density. I will sacrifice speed for the larger density.
We’re not focused on that market.
So I think there is a place for both of us .. to coexist.
You know there are also areas where some servers don’t have as many sockets – so the loading isn’t an issue for them – they just want the largest density possible for that particular socket.
And they don’t have a lot of sockets because there are space constraints and LRDIMM may work fine in those areas.
Again, not a market that we are counting as part of our camp (?).
I don’t think there is anything that the Load-Reduction DIMM (LRDIMM) does better than the HyperCloud.
But I don’t know everything about it. There may be something that they can come out with soon (?).
Those are great questions. Any other questions.
Alright, I would like to thank you for your attention today.
A pleasure speaking with you.
Accompanying summarizing thread on the NLST yahoo board:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=35832&mid=35832&tof=1&frt=2
Oct 6, 2011 – Craig-Hallum CC – transcript (not exact)
Interesting and informative conference call, Netlist.
I think it also explains some of the recent hardware patent purchases that Google has made recently as well, especially the section on Flash memory.
Thanks.
The NLST/GOOG legal fight is stuck at USPTO pending reexaminations etc.
Same for NLST/Inphi. However, Inphi has withdrawn Inphi vs. NLST (I suspect because court docs suggest NLST was challenging the validity of two Inphi patents as a possible case of double patenting – which could weaken or invalidate them). Meanwhile, NLST vs. Inphi is still ongoing (again pending reexaminations).
NLST is making some progress on HyperCloud, but its main progress, it seems, has been in the storage space with their battery and supercapacitor backed memory-to-flash. Currently of use in RAID systems and storage stuff, but it could possibly become mainstream (imagine a computer that doesn’t lose its info if the power plug is pulled).
Hi netlist,
I know the Google litigation has stalled at this point, but I’m still a little staggered by the initial acquisition that I wrote about in this post with Google buying all the patents they could from Metaram. As I’ve noted in my comment a few days ago, Google has been acquiring a lot of new patents from IBM, from Quantum, and others that are more hardware based. Interestingly, a few of those have been in the area of flash memory, like you describe, with the ability to backup data in memory to a flash drive to save it if there is a power loss. I wouldn’t be surprised if Google started looking towards installing flash memory in the machines that run its data centers.
Flash storage and flash as cache between RAM and hard disks. Some of NLST’s NVvault products are used in such stuff (like LSI’s CacheCade). NVvault has flash on-board the memory module which allows saving of RAM contents to flash in event of power failure (similar to battery-backed memory modules – but NVvault has a “supercapacitor” option so one can avoid using a battery).
Regarding direct use of RAM, this is an interesting article someone posted on NLST yahoo board:
http://www.wired.com/cloudline/2011/10/ramcloud
The Quest for the Holy Grail of Storage … RAM Cloud
Jon Stokes
Thanks for the link to that article, netlist.
I’m going to have to revisit the new Google patents on flash memory that they acquired and see exactly what they cover, but I think that those are shared goals they have as well.
Does anybody see how this news affects the GOOG & Inphi lawsuits?
http://finance.yahoo.com/news/Netlist-Receives-Notices-of-prnews-3188386243.html?x=0&.v=1
I don’t see the link but the tone Hong tries to put across makes it sound like he thinks NLST is locking up the loose ends surrounding their IP.
NLST Q3 2011 earnings call transcript (not exact)
http://www.netlist.com/investors/investors.html
http://viavid.net/dce.aspx?sid=00008EC4
Netlist Third Quarter, Nine-Month Results Conference Call
Thursday, November 10, 2011 at 5:00 PM ET
Participants:
Chuck Hong – NLST CEO
Gail Sasaki – NLST CFO
Jill Bertotti – Allen & Caron (Investor Relations firm for NLST)
Questions:
Rich Kugele at Needham
Jeff Martin of Roth Capital Partners
Jill Bertotti:
.. with that I would now like to turn the call over to Chuck.
Good afternoon Chuck.
at the 02:30 minute mark:
Chuck Hong:
Thanks Jill, and thank you all for joining us today.
In the third quarter (Q3 2011) we continued the trend of improving financial performance by steady execution on our base business which includes the Vault family of products, flash and specialty DIMMs.
We reported revenues of $16.3M up 55% over last year.
And delivered an adjusted EBITDA breakeven for the quarter.
We expanded our gross profits, and decreased our net loss by 79% year on year.
Performance was driven by growth in the Vault product line, where we had outstanding revenue growth of 43% over the prior year.
And by flash and SSD growth of 226%.
We shipped $1M of HyperCloud products during the quarter.
Our best quarter to date.
at the 03:25 minute mark:
During the quarter our 8GB and 16GB HyperCloud (memory) modules were qualified on Gigabyte’s high density server motherboard.
Gigabyte is one of the top manufacturers of server motherboards and other computing hardware.
As an example of the real world application and value-add of our technology, integrating HyperCloud on one of Gigabyte’s advanced server motherboards enables 288GB of memory capacity running at 1333 mega transfers per second (1333MT/s – often called “1333MHz” in discussions here).
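(Editor’s sketch: the per-channel bandwidth implied by 1333 MT/s on a standard 64-bit DDR3 channel works out as follows – a back-of-envelope peak figure that ignores command overhead and the number of channels per CPU.)

```python
# Peak bandwidth implied by 1333 MT/s on one standard 64-bit DDR3 channel.
transfers_per_s = 1333e6   # 1333 mega-transfers per second
bus_bytes = 8              # 64-bit data bus = 8 bytes per transfer
bw_gb_s = transfers_per_s * bus_bytes / 1e9
print(round(bw_gb_s, 2))   # 10.66 GB/s per channel (the PC3-10600 rating)
```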
at the 04:00 minute mark:
We also teamed up with Swift Engineering, the leading provider of high performance simulations for design in aircraft and racecars.
Our HyperCloud 16GB memory modules made this possible, where Swift had run into technical limitations in the past.
Swift has already published use cases and white papers showcasing HyperCloud’s advantages in computational fluid dynamics (CFD) simulations for aerodynamic design.
While Swift is just one customer, it is a thought leader in the field, and as a result we are receiving inquiries from other firms that conduct complex simulation work.
at the 04:40 minute mark:
Also earlier today, we announced that we are running validations that are showing HyperCloud’s significant performance advantages for large data analytics workloads, when compared to industry standard memory on a standard 2-processor server running Sybase IQ, a financial services database.
Those validations are showing that HyperCloud is delivering performance gains of up to 90% in equity trading applications, and delivers real profit opportunities to financial services firms.
at the 05:15 minute mark:
The qualification at Gigabyte and validation on Swift and Sybase applications are among the latest in a series of demonstrations on the benefits of HyperCloud over standard server memory.
Starting earlier next .. uh .. early next year, these benefits will be brought to the mainstream server market with the release of next-generation servers based on Intel’s Romley processor.
We expect HyperCloud to start shipping in volume with the launch of these servers by the major OEMs.
at the 05:50 minute mark:
While it is still early, we believe that due to the customer benefits, which we have articulated here and over the past year, HyperCloud will catch an unfair share of the market for high-end server memory.
at the 06:00 minute mark:
As we get closer to the launch of the new servers, we will be able to quantify, with more granularity, the scope of the volume ramp of HyperCloud in 2012.
For now, suffice it to say that volumes from the mainstream server OEMs will be substantially larger than what we are seeing today on Westmere systems.
And that we expect HyperCloud to drive a significant portion of our growing top line in 2012.
at the 06:30 minute mark:
Outside of the potential short term financial impact, I believe it is important to provide a perspective on what HyperCloud has accomplished since it was introduced two years ago.
Unlike I/O or other peripheral devices, memory, along with the CPU, sits at the heart of the server and is therefore critical to the performance and reliability of the server.
So it would be natural that server designers would be hesitant to experiment with a unique proprietary memory technology.
In fact, in the history of servers, outside of a few in-house solutions, there has never been a proprietary memory technology that has been widely adopted by the mainstream server market.
HyperCloud is the first.
Once adopted by the mainstream, starting with the Romley-based systems early next year, and deployed in a variety of applications by end-customers, it is a good bet that the technology will be supported by the OEMs for many years to come, and eventually become a permanent fixture in the server ecosystem.
at the 07:40 minute mark:
The JEDEC proposal earlier this year to use NLST’s patented distributed architecture for server memory at DDR4 is a clear indication that HyperCloud is the right technology path for server memory for years to come.
at the 07:55 minute mark:
In anticipation of that longer term vision, NLST continues to invest in R&D (research and development) for the next generation HyperCloud, working with OEM and silicon partners to create the highest performance server memory design in the world.
at the 08:10 minute mark:
In addition, we have continued to create patents and know-how in order to maintain our significant technological lead in the area of “rank multiplied, load reduced memory architecture”.
at the 08:25 minute mark:
In recent months, we have seen a series of positive developments that protect and extend our IP that surrounds HyperCloud.
Including the recent receipt of our 7th patent in this area this year.
at the 08:40 minute mark:
The enormous potential of HyperCloud and its impact on NLST’s business, as well as on the rest of the industry, will be displayed and communicated at the annual Supercomputing conference SC’11 in Seattle next week.
At the industry’s top venue, we plan to announce the launch of a number of key programs, and demonstrate breakthrough technologies.
While I am unable to speak to these much .. to these in much detail today, the anticipated announcements will include a powerful new product platform, and landmark technology partnerships with industry leaders.
We hope you will stay tuned in the coming days as we will (rollout ?) these programs at SC’11.
at the 09:30 minute mark:
Gail will now provide you a more detailed financial update for the quarter, as well as a high-level discussion about 2012.
Gail ?
Gail Sasaki:
at the 09:40 minute mark:
Thanks Chuck.
Revenues for the third quarter ended October 1, 2011 (Q3 2011), were $16.3M, up 55% when compared to $10.6M for the third quarter ended October 2, 2010 (Q3 2010).
And a slight sequential increase over Q2 2011 of 2%.
This flattish sequential revenue was due to a shortfall in flash and specialty DIMM shipments, some of which will ship in Q4 2011.
at the 10:05 minute mark:
Gross profit for the third quarter ended October 1, 2011 (Q3 2011) was $5.5M or 34% of revenues, compared to a gross profit of $3.0M or 29% of revenues for the third quarter ended October 2, 2010 (Q3 2010), an increase in gross profit dollars of 83% and a sequential increase of 12%.
at the 10:30 minute mark:
The quarter over quarter improvement was due to the 55% increase in revenue, a favorable DRAM cost environment, as well as the increased absorption of manufacturing costs as we produced 21% more units than the year earlier quarter.
at the 10:45 minute mark:
We expect our gross profit to range from 30% to 35% during the fourth quarter (Q4 2011) this year.
Which will be dependent on the quarter’s product mix, production volume and DRAM cost.
at the 10:55 minute mark:
Adjusted EBITDA, after adding back net interest expense, income taxes, depreciation, stock based compensation and net non-operating expense, was $32,000 for the third quarter ended October 1, 2011 (Q3 2011), compared to an adjusted EBITDA loss of $4.0M for the prior year period.
at the 11:05 minute mark:
Net loss in the third quarter ended October 1, 2011 (Q3 2011) was $1.0M or $0.04 loss per share, compared to a net loss in the prior year of $4.9M or a $0.20 loss per share.
at the 11:30 minute mark:
These results include stock based compensation in the third quarter of $464,000 compared with $413,000 in the prior year period.
And a depreciation and amortization expense of $534,000 in the most recent quarter, compared with $561,000 in the year earlier period.
Revenues for the nine months ended October 1, 2011 (Q3 2011) were $44.3M, up 60% from revenues of $27.8M for the prior year period.
at the 12:00 minute mark:
Gross profit for the nine months ended October 1, 2011 (Q3 2011) was $14.3M or 32% of revenue, compared to a gross profit of $6.7M or 24% of revenue for the nine months ended October 2, 2010 (Q3 2010).
at the 12:20 minute mark:
Adjusted EBITDA loss, after adding back net interest expense, income taxes, depreciation, stock based compensation and net non-operating expense, was $2.2M for the first nine months ended October 1, 2011 (Q3 2011), compared to an adjusted EBITDA loss of $9.8M for the prior year period.
at the 12:40 minute mark:
Net loss for the nine months ended October 1, 2011 (Q3 2011) was $5.4M or $0.22 loss per share, compared to a net loss in the prior year period of $11.9M or a $0.51 loss per share.
These results include stock based compensation expense of $1.2M for both periods, and depreciation and amortization expense of $1.7M for both periods.
at the 13:05 minute mark:
Total operating expenses were flattish at $6.5M (Q3 2011) from $6.3M in the previous consecutive quarter (Q2 2011).
While we had expected this to increase for the second half of the year, we have been able to bring the development cost that we had planned for next-generation products below budget as our engineering team completed work ahead of schedule with less external resources.
at the 13:30 minute mark:
The decrease from $7.9M in the year earlier quarter (Q3 2010) was primarily due to a 20% decrease in research and development (R&D) expense related to an absence in 2011 of non-recurring engineering costs incurred in 2010 in association with next-generation introduction.
at the 13:50 minute mark:
Sales and marketing expenses decreased by 19% from the previous year due to improved efficiency and lower sample costs.
at the 13:55 minute mark:
Administration expense decreased 10% from the year earlier quarter.
at the 14:00 minute mark:
Overall we expect that total operating expenses will be slightly lower during the fourth quarter of the year.
at the 14:10 minute mark:
On the IP front we continue to vigorously defend our patent rights in the USPTO.
In October (2011), we received positive news via an office action in the ‘912 reexam, allowing 10 broad original claims and 10 new claims.
As noted in earlier calls, these processes will run their course and we remain comfortable in our position and confident in the validity and enforceability of our patents.
at the 14:35 minute mark:
We did not record a benefit for income taxes for the third quarter ended October 1, 2011, as operating loss carry-forwards generated were fully reserved.
at the 14:45 minute mark:
On a go-forward basis, we anticipate a rate of zero percent until we begin to utilize our fully reserved net deferred tax assets.
at the 14:55 minute mark:
We ended the third quarter with cash, cash equivalents and investments in marketable securities totalling $10.6M compared to $12.1M as of July 2, 2011.
At the end of the quarter, we have unutilized availability of $1.5M on our credit line.
During the third quarter (Q3 2011), capital expenditures totalled $326,000, compared to the same amount in the year earlier quarter (Q3 2010).
at the 15:20 minute mark:
We anticipate investment in equipment to support increased capacity and our new products over the next several months of approximately $500,000.
at the 15:25 minute mark:
We were able to reach an adjusted EBITDA breakeven this quarter.
However we may still be a net user of cash the next couple of quarters, depending on our cash cycle which increased by 12 days from Q2 2011.
As you know from past calls, we believe we have sufficient capacity on our current $15M line of credit for working capital needs.
at the 15:50 minute mark:
At the end of the quarter, we filed a new S-3, that puts in place the shelf offering that COULD allow us to sell up to $40M in securities.
In the event we utilize this shelf offering, it would be to fund anticipated HyperCloud ramp, and next-generation HyperCloud NVvault R&D (research and development), as well as to accelerate the commercialization of current products.
at the 16:10 minute mark:
Since our next call will not take place until the new year, we would like to wrap up our prepared remarks with some high level guidance for 2012.
With the introduction and qualification of new products in the remainder of 2011, and throughout next year, we believe an increase in revenues of 50% to 100% will be realizable for 2012.
With the majority of that growth weighted towards the second half.
at the 16:40 minute mark:
In addition, we anticipate crossing over into GAAP profitability during Q2 2012 while we continue to invest aggressively in next-generation R&D.
Thank you very much for listening in today.
Operator we are now ready for questions.
Question & Answer session ..
at the 17:30 minute mark:
Rich Kugele at Needham:
Good afternoon, Chuck and Gail.
Uh .. can you hear me ok ?
Chuck Hong:
Yeah, hi Rich.
Gail Sasaki:
Hi Rich.
at the 17:35 minute mark:
Rich Kugele at Needham:
Hi .. um .. so uh uh uh a few questions and .. uh .. and I apologize for being slightly out of order here, because Gail you just said some pretty meaningful things.
But um .. I’ll start actually with what I had prepared.
Uhm .. you know I just want to talk a little bit about the LRDIMM market as it relates to Romley.
And uh .. or the next-gen from Intel.
Uhm .. can you just talk about how big that market is – I know that in recent months we’ve seen a few competitors actually exit that ..
(NOTE: hey! wait a second – this is what we have been discussing i.e. as I have mentioned that IDTI deemphasizing over 3 consecutive conference calls and their earlier mention of TXN (Texas Instruments) being “not interested” in this space which might have links to NLST vs. TXN which was supposedly settled favorably to NLST)
.. space .. uh .. from TI (Texas Instruments) and IDTI .. uh .. just outright ..
(NOTE: voice almost breaking – their “analysis” been seriously flawed ignoring NLST – though it is to his credit that he is openly coming out on this though eventually had to)
.. difficult time figuring out how many units that market actually is and how competitive the solutions are or aren’t.
Um .. then I have some followups.
at the 18:20 minute mark:
Chuck Hong:
Yeah Rich, I think there has been a handful of reports .. um .. that have been written up .. about the .. uh .. LRDIMM market.
Uh .. that marketplace is the exact .. target market .. uh .. for HyperCloud.
Uh .. that we’re targeting.
Um .. there’s probably anywhere between 70M and 80M registered DIMM or server memory modules being shipped worldwide today.
Uh .. those reports indicate that over time .. uh .. the LRDIMM .. uh .. may become 10-15% .. uh .. of that market.
Um .. my my personal view is that it will probably NOT be that large.
Uh .. the difference in .. uh .. uh .. the way that chip manufacturers, buffer manufacturers like an Inphi .. uh .. address that business opportunity is different from ours.
They are selling a chipset that .. uh .. you know that is valued at $10-$20, whereas we are selling an entire memory module .. uh .. that is valued at anywhere between $300-$400 up to $1200-$1500 depending on the density. Primarily it will be 16GB and 32GB.
at the 19:50 minute mark:
So .. we believe the market will be certainly in the millions of units .. uh .. come next year.
With the LRDIMM and the HyperCloud .. um .. and .. uh .. at some point down the road as the Romley matures .. uh .. that it may .. the percentages may get into the teens (i.e. above 10%).
For next year, I think it will be a smaller portion, but for us it’s still a a tremendous opportunity .. uh .. you know ..
at the 20:25 minute mark:
Rich Kugele at Needham:
(here he interrupts Chuck Hong)
From selling .. selling the module .. so much more .. than if you were just selling a chip .. right ?
Chuck Hong:
Absolutely. Absolutely.
at the 20:35 minute mark:
You know even at a let’s say if there are 70M RDIMMs being shipped today, registered DIMMs, and um .. the .. opportunity for the the high performance module is about a million units .. uh uh .. at an ASP (average selling price) of .. uh .. $500 let’s say.
That’s a $500M market opportunity.
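(NOTE: Chuck Hong’s market arithmetic here can be reproduced directly. All figures below are the ones he quotes on the call – 70M registered DIMMs per year, a 1M-unit high-performance module opportunity, a $500 ASP.)

```python
# Reproducing the market-size arithmetic from Chuck Hong's answer.
rdimm_market_units = 70_000_000  # registered DIMMs shipped per year (his low estimate)
hp_module_units    = 1_000_000   # high-performance module opportunity he cites
asp_usd            = 500         # his assumed average selling price per module

market_usd = hp_module_units * asp_usd
print(market_usd)                # 500,000,000 -> the "$500M market opportunity"

# As a share of the overall RDIMM market that is still small:
share = hp_module_units / rdimm_market_units
print(f"{share:.1%}")            # ~1.4% of registered DIMM units
```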
Rich Kugele at Needham:
Ok, uh .. and then I guess just lastly on .. on on some of Gail’s comments most recently there .. uh .. at the end of .. prepared remarks .. um .. it seems at 50%-100% of revenue growth year over year (i.e. in one year) that that would HAVE to be .. HyperCloud you know I guess you’re not breaking that out at this point.
But you know maybe another way of approaching it is what do you expect the Vault business to do year over year and the more traditional memory side as well.
at the 21:30 minute mark:
Chuck Hong:
Our plan shows that .. uh .. the revenues will increase throughout our product line with the exception of the PERC business, the battery-backed .. and the battery-free solution that is being shipped into DELL.
That will be .. that will decline .. um .. with the introduction of their next-generation servers .. um .. next year.
But with the exception of that, NetVault (NVvault), flash, speciality DIMMs including VLP (very low profile memory for blade servers) .. uh .. which will be shipped into .. uh .. a major OEM .. uh .. blade server .. uh .. and then HyperCloud.
All of those will show .. uh .. double-digit growth in terms of revenues. All those segments.
So that’s .. if you add that up, we ARE looking at revenue growth that is exceeding 50% for 2012.
50% growth over this year.
at the 22:50 minute mark:
Rich Kugele at Needham:
Ok, then just one last one um .. Gail what was the actual cash burn in the quarter or cash from operations cash used in operations ?
Gail Sasaki:
at the 22:55 minute mark:
Um .. the cash from operations burn was under $1M.
And then we had a .. um .. some purchases of fixed assets in range of .. about $300,000.
And then we had some jet service (?) about $400,000 (what is this jet service ?).
Rich Kugele at Needham:
Ok, great. Thank you very much.
Gail Sasaki:
Thank you Rich.
at the 23:40 minute mark:
Jeff Martin of Rodd (?) Capital Partners:
Thanks. Good afternoon and thanks for taking my questions.
Wanted to get a sense of .. whether the HyperCloud orders in the quarter were from previously announced vendors or from new vendors.
And if .. uh .. if that pertains to Gigabyte and Swift .. uh .. could you clarify ?
at the 24:00 minute mark:
Chuck Hong:
Jeff, can you repeat your question ? Sorry.
Jeff Martin of Rodd (?) Capital Partners:
Sure. The HyperCloud shipments in the quarter.
I believe it was a $1M .. $1M of revenue.
Were they from previously announced vendors or from .. uh .. from new vendors.
Chuck Hong:
Yes, previously announced customers.
Jeff Martin of Rodd (?) Capital Partners:
Ok .. and .. in terms of the application, were those mainly servers or storage ?
Chuck Hong:
Uh .. mostly .. uh .. servers.
at the 24:35 minute mark:
Jeff Martin of Rodd (?) Capital Partners:
Ok, and then can you can you kind of give a sense of the opportunity and how many data centers are those customers running in and how large could these initial – I assume these are more on the initial order side of things – and how how much could those ramp .. um .. over 2012.
at the 24:55 minute mark:
Chuck Hong:
I think the .. the revenues that we are generating today .. uh .. over the last couple of quarters on HyperCloud are .. from qualifications and socket wins, design wins on Westmere.
And some even from Nehalem.
Our product is uh .. backward compatible .. to those systems.
But the vast majority of the volumes that we expect .. for 2012 HyperCloud revenues .. will come from adoption into the Romley .. uh .. processor based servers that will launch in February .. um .. expected to launch in February 2012.
at the 25:45 minute mark:
Jeff Martin of Rodd (?) Capital Partners:
Ok. And then in terms of qualifications, how should we think about that. How many OEMs are you qualified with today. Do you think the next wave of qualifications comes .. uh .. in that February (2012) time-frame with Romley rolling out or are you .. do you think you will qualify ahead of that ?
at the 26:05 minute mark:
Chuck Hong:
Uh .. the current qualifications are probably half a dozen .. uh .. large OEMs .. on Westmere and Nehalem boxes.
Uh .. the Romley qualifications .. the qualifications on Romley systems are with major OEMs .. the top handful of of server OEMs in the world.
So those qualifications .. um .. systems are launching in February.
Those .. uh .. will be completed as we speak here. Today.
Um .. the systems will be .. locked down with the final qualified bill of materials here over the next .. month or two.
So the work is being done today.
at the 27:05 minute mark:
Jeff Martin of Rodd (?) Capital Partners:
Ok. And then in terms of gross margins on HyperCloud.
I assume the margins during the third quarter (Q3 2011) were typical of kind of an early ramp .. uh .. ramping product.
And if you can comment to .. to the gross margins for HyperCloud in the quarter and your expectations for .. for the gross margins of the product next year.
at the 27:25 minute mark:
Chuck Hong:
Our gross margins have steadily increased through the course of this year.
Uh .. and HyperCloud is within that range of gross margins.
It will neither .. we don’t believe it will greatly increase the percent margins or dilute it.
However we believe that in terms of absolute dollar contribution .. uh .. HyperCloud will be the largest dollar .. uh .. largest contributor to gross margin dollars next year.
at the 28:00 minute mark:
Jeff Martin of Rodd (?) Capital Partners:
Ok. And then finally on the SSD side.
Um .. what are the products that you are targeting there.
And you know does the SSD revenue – is that booked as legacy revenue or how is that recognized ?
at the 28:25 minute mark:
Chuck Hong:
The SSD products, if you look at what we have, are different from .. uh .. the standard SSDs that replace the hard drive .. they are in the PC or in the enterprise source (storage ?) space.
Our SSDs are industrialized, they are small form factor SSDs.
Uh .. they are ruggedized. But they are much smaller capacity.
And they are used in different applications – let’s say in a boot drive in a server which does not require the density of you know of a hard drive.
Um .. and there are certain industrialized applications – in factory automation, and in medical equipment.
Um .. so .. it’s a different kind of SSD .. than what you know what is being sold in the mainstream.
Uh .. into the .. uh .. storage space.
It is not a hard drive replacement.
Jeff Martin of Rodd (?) Capital Partners:
Ok. Thanks for your time.
Chuck Hong:
Thank you once again for following our progress this quarter and we look forward to reporting back with fourth quarter results (Q4 2011) in February (2012).
And we look forward to seeing some of you at the SC’11 next week in Seattle.
Thank you.
Accompanying summarizing thread on the NLST yahoo board:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=36509&mid=36509&tof=1&frt=2#36509
NLST Q3 2011 earnings call transcript (not exact)
UBS conference transcript (not exact) – Nov 17, 2011
http://www.netlist.com/investors/investors.html
UBS Global Technology and Services Conference
Thursday, November 17, 2011 9:00:00 AM ET
http://cc.talkpoint.com/ubsx001/111511a_im/?entity=63_EIUMYWQ
participants:
Chris Lopes (VP Sales and Co-Founder)
Gail Sasaki (CFO)
Chris Lopes:
Well I’m excited to be here this morning.
Let’s see .. so many faces out there.
We’ll be giving you a quick overview of our company and our markets, and specifically some of the IP that is generating so much excitement .. uh .. recently.
So NLST was a .. let me cover with our standard financial disclosure statements. You know what that looks like.
NLST was started about 11 years ago – three of us started this company. We saw the need to develop subsystems for large OEMs in the server and networking space, predominantly around memory.
And as such it became an extension of the engineering arm of our customers – in some cases called in to solve some of their most difficult problems which helped us to grow our expertise in that area quite a bit.
In the last 11 years we’ve .. accumulated over $750M worth of revenue .. in DRAM and flash-based subsystems.
So a subsystem plugs into a system – it doesn’t have its own power supply – we use a variety of semiconductor devices and we’ve designed some of our own semiconductor devices to go on those subsystems.
We have design centers in San Jose (CA), in Irvine (CA) and our manufacturing is done in Suzhou, China.
at the 01:15 minute mark:
We serve a very elite customer base – the largest server manufacturers of the world – IBM, HP, CSCO, DELL .. uh .. EMC, NetApp, FFIV are all customers of ours today.
And today we will talk a little bit about some of the IP that is real exciting for the people in this space.
One is called HyperCloud, and the other is NVvault – so we’ll spend some time on that – we have about 60 patents issued and pending right now.
So pretty exciting time ..
at the 01:45 minute mark:
If you look at our markets .. and two areas we will talk about today .. the cloud data center, the cloud computing area .. we’ve got the servers and then storage piece of that.
The servers represent over $4B of opportunity for us. And the storage about $600M.
So let’s spend a little time on the products that go into the server space.
at the 02:05 minute mark:
So as we talk about cloud computing – we are focused on the applications that require high .. uh .. amounts of DRAM.
So large capacity DRAM – so you are looking at things like high-performance computing, securities trading, things that we want to put a database off a disk and move it right into working memory.
at the 02:25 minute mark:
The market for cloud server units is expected to grow at about a 20% clip over the next 4 years.
So it’s an exciting market for us.
If we take a very quick snapshot of the size of the market for us.
Industry analysts are estimating about 20% of the newest and latest Intel (INTC) family of servers – Romley family – would use a “load reduced” or a “rank multiplied” memory.
And that’s what we call our “HyperCloud” memory.
at the 02:55 minute mark:
Whether you agree with that or not, let’s just use ONE percent (1%) of that number – to keep the math really simple.
You get an idea of how big and how fast this market can grow.
So if we use a 1% estimate – there’s about 9M servers sold in the world this year – so let’s take 1%, let’s call it a 100,000 servers for next year.
And each server that uses high density memory typically fully loads that memory – and that can be anywhere from 12 to 24 sockets (DIMM sockets/slots) in each of these servers.
Let’s just use 10 to 12 (sockets/slots) to keep the numbers easy again.
So we take a 100,000 servers – we take 10 DIMMs per .. 10 memory modules in each one – you’ve got a million (1M) units.
Well millions of anything is not a great market .. USUALLY. Because we are talking semiconductors in the conference here today .. uh .. chips are $10 to $20, $30 .. but in our case we are selling subsystems.
And our 16GB and 32GB subsystems average around $500 each.
at the 03:50 minute mark:
So even at a very very conservative estimate .. 1% of the servers, only 10 DIMMs per server, we are looking at a $500 ASP (average selling price) or $500M in revenue for next year.
Now that’s a pretty significant growth from where we are today .. so .. hence the excitement about the opportunity in this market.
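(NOTE: the speaker rounds 1% of 9M servers up from 90,000 to “call it 100,000” to get his $500M figure. Running his quoted inputs exactly, and sweeping the attach rate and DIMMs-per-server across the ranges he mentions, shows how sensitive the opportunity is. The grid extremes below are my own illustrative variations, not company guidance.)

```python
# Sensitivity sweep around the speaker's "very conservative" assumptions:
# servers sold x attach rate x DIMMs per server x ASP.
servers_per_year = 9_000_000   # his figure for servers sold this year
asp_usd = 500                  # his average selling price

for attach_rate in (0.01, 0.05, 0.20):     # 1% (his example) up to the 20% analyst estimate
    for dimms_per_server in (10, 12, 24):  # he quotes 12-24 sockets, uses 10 for easy math
        revenue = servers_per_year * attach_rate * dimms_per_server * asp_usd
        print(f"attach={attach_rate:.0%} dimms={dimms_per_server:2d} -> ${revenue / 1e6:,.0f}M")
# The 1% x 10-DIMM case lands at $450M, which he rounds to "$500M in revenue".
```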
at the 04:10 minute mark:
There are several reasons that the industry needs high performance and HyperCloud memory .. the servers are limited by the densities of the DRAM available from companies like Samsung, Micron, Hynix.
And Moore’s law is really catching up in the DRAM industry. It’s become very very difficult to scale past 4Gbit (DRAM die).
The 8Gbit (DRAM die) are likely the last monolithic density we’ll see and maybe only from Samsung.
There is a HUGE investment required to get to that .. those final lithographies .. sub-20 nanometer in memory.
You’ve got a storage cell that becomes a problem.
So while processors can scale effectively, we are running out of room in the DRAM industry.
at the 04:55 minute mark:
So there’s some applications and consortiums and people are saying “how do we solve this problem”, so Intel (INTC) and AMD and the large server OEMs need solutions to keep up with the high .. high horsepower of the CPUs with the memory .. so .. uh .. with multi-core and higher frequencies we got a lot of challenges ahead of us as an industry ..
at the 05:15 minute mark:
So .. some of the alternates are to STACK the memory – that’s called 3DS.
To take 4Gbit DRAM (dies) and stack them on top – so build 2-storey and 4-storey buildings, if you will.
And there are challenges with that, but the industry is working on that.
Another one is to use “rank multiplication” which is what we do .. and LRDIMM attempts to do that ..
And then other people say “well let’s get out of DRAM altogether – let’s look at SSDs” .. you are seeing a lot of SSDs being used in the data center today.
That’s a patch .. it’s an effective patch TODAY.
However, we see a growing number .. a growing percentage of applications where that doesn’t, it doesn’t give them the performance they are looking for ..
And then .. uh .. there’s even companies making SSDs that bypass the SAS and SATA interfaces and go right on the PCI card – we make some of those products ourselves – however that becomes kind of a disaggregated way of going to market, and we feel you need to work holistically WITH the server designers to really get the best solution to their end-customers.
That’s what we do – we work directly with the design teams of our customers.
at the 06:20 minute mark:
If you look at where technology can impact the performance inside of a server .. they’ll try and go straight .. pretty quick .. a lot of people have heard about moving from hard drives to solid state drives (SSDs) and some of you may have done that yourselves .. and it IS pretty cool to be able to boot up instantaneously ..
at the 06:40 minute mark:
The whole idea there is the access time improves – so when you go ask for data, the processor needs data .. has to pull it from somewhere ..
So if it pulls it off the drive it’s typically in the milliseconds, sometimes 10 milliseconds (10 ms) to get that data .. if we go off an SSD we can go about 100 to 1000 times faster, into the tens of microseconds.
If we put that into the .. on the PCI bus we get a little faster.
But when we move that all into DRAM we go another 1000 times faster – so you can think of a DRAM access as about a million times faster than a hard drive access.
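(NOTE: the latency hierarchy he describes can be tabulated. The millisecond and microsecond figures are his; the ~10 ns DRAM figure is my own ballpark, roughly one DDR3 CAS latency, chosen only to illustrate where the “million times faster” claim comes from.)

```python
# Rough storage-hierarchy latencies as described in the talk.
latency_s = {
    "hard drive":   10e-3,  # ~10 ms (his figure: seek + rotation)
    "SATA/SAS SSD": 10e-6,  # tens of microseconds (his figure)
    "PCIe SSD":     5e-6,   # "a little faster", skipping the SAS/SATA stack (my guess)
    "DRAM":         10e-9,  # ~10 ns, about one DDR3 CAS latency (my assumption)
}
hdd = latency_s["hard drive"]
for medium, lat in latency_s.items():
    print(f"{medium:12s} {hdd / lat:>12,.0f}x faster than a disk access")
# DRAM at ~10 ns vs. a ~10 ms disk seek is the "million times faster" ratio.
```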
at the 07:10 minute mark:
So if you are running very large scale models – and I just came from the Supercomputing (SC’11) show in Seattle (WA) yesterday.
And this is, you know NASA on steroids.
This .. these guys are some of the brightest scientists in the world are there to learn HOW do I make my performance run faster.
So it’s equivalent to the automotive industry – everyone is talking about the latest turbo-charger and how do you make this thing faster.
at the 07:35 minute mark:
So pretty exciting when we can put complete models into the DRAM space, and so that’s what we focus on doing.
at the 07:40 minute mark:
There are a number of applications where we have run some benchmarks – these are just a few (referring to slides) .. uh .. finite-element analysis and modelling – we are seeing a 21% improvement.
Data analytics – up to over 30% improvement.
Uh .. floating point benchmarks are running over 10%.
EDA (electronic design automation) – guys that are building the great processors .. are seeing 15%.
Some of the fun things we get to do is work with race car teams like Red Bull Racing and the computational fluid dynamics (CFD) they are able to build faster cars and simulate those faster by using our memory.
And even quantum dynamics (?) – if you want to crash a few cars and see what that looks like on computer .. uh .. you can do that faster.
at the 08:15 minute mark:
So these speed up the rate of innovation in our customers’ customers space .. because we improve the performance of the infrastructure.
at the 08:25 minute mark:
So how do we do that ?
So HyperCloud memory is a subsystem – it is about a 5-inch memory module. It plugs right into a standard industry socket (DIMM socket).
We take off the cover and look underneath, there’s two key pieces of IP that NLST invented and creates and has produced.
One is our register device – you see that in the center of the (memory module) card – the register acts as a doubler.
Doubles the amount of memory that the system can access – so it takes 4 ranks of memory on the DIMM and makes it look like 2 ranks of memory to the system.
And that is important because there are limited number of ranks available coming out of the processor.
So we get to double that.
at the 09:00 minute mark:
And AS important .. uh .. some people would tell you – becoming even MORE important as we increase speeds is our isolation device technology – that’s a mixed-signal (i.e. analog plus digital) ASIC .. put 9 of those on our DIMMs – those act as buffers .. uh .. for the data lines, effectively minimizing the electrical load on that bus to make it look like one load instead of 4 loads.
at the 09:25 minute mark:
So we can, as a result, and you may have seen the press .. yesterday I think we had a press release about our testing and demonstration – we are running a FULLY LOADED system at 1333 speeds (1333MHz) – no one else can do that today ..
So .. not only do our customers need high density, they need FAST high density.
at the 09:40 minute mark:
So we found a way to do that .. it’s complicated .. took us a long time to do this but .. with the benefit of seeing customers’ problems for almost a decade, we’ve been able to build on what works and what doesn’t .. with the whole ecosystem in mind, and create some unique products.
We can NOW put 768GB of RAM into one 2-processor server .. that’s a lot of RAM .. and you can do a lot of things ..
at the 10:05 minute mark:
Uh .. so we make the 16GB and 32GB versions of that – plugs in, JEDEC compatible, no BIOS changes required .. and uses our patented rank management and multiplication.
at the 10:20 minute mark:
So .. in a 2 processor server, this is what it looks like – you’ve got 12 memory sockets per CPU – you put them together and you maximize the memory.
at the 10:30 minute mark:
Uh .. nice endorsement recently came out as we introduced our 32GB just this week.
And we demonstrated it for the first time at the Supercomputing (SC’11) show.
Uh .. this is an endorsement .. uh .. from one of the engineering vice-presidents over at Hewlett-Packard (VP Engineering at HP) who says his customers are looking for greater memory capacity AND bandwidth, which is what we just talked about.
And that the NLST HyperCloud product helps customers achieve this.
at the 10:55 minute mark:
There are alternate technologies that people are pushing on the market, to try to get more capacity on a server.
One is called LRDIMM – they are “load reduced DIMM” .. so I’d like to compare very quickly so you see where we stand versus that.
The LRDIMM is on the top of this chart, and it contains one very large memory buffer in the middle.
And you see ours by comparison has a standard sized register – although we have some secret sauce in that register.
And 9 isolation devices along the (bottom) – that is called a “distributed architecture”.
at the 11:25 minute mark:
So the monolithic architecture on the top, you can see from the chart .. the data paths .. so for a signal to go from the edge of the connector and .. to pull memory out to come back, it has to follow the blue traces .. all the way in to the memory buffer.
Follow the orange traces to the particular DRAM, the orange traces BACK to the memory buffer, and the blue traces all the way back out to the edge (of the memory module) card.
at the 11:55 minute mark:
So those are some fairly significant-sized highways, if you will.
If you are thinking about navigating this as a city, and that .. we call that “latency” ..
So that latency from when you want memory to when you get it, is much greater on a monolithic design, because the highways are so long ..
By contrast, the HyperCloud memory, has very short data paths.
at the 12:15 minute mark:
So what we do in .. one clock, our competition takes 4 to 6 clocks to do.
So that’s significant for those high performance computing applications that not only need high density, but they need to access that data quickly.
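The “one clock versus 4 to 6 clocks” claim can be put in rough nanoseconds. This is my own back-of-the-envelope arithmetic, not a figure from the talk, assuming DDR3-1333 (1333 MT/s, i.e. a ~667MHz clock):

```python
# Back-of-the-envelope: convert "4 to 6 extra clocks" of latency into
# nanoseconds at DDR3-1333. My own arithmetic, not a figure from the talk.

clock_mhz = 1333 / 2          # DDR transfers twice per clock -> ~667MHz clock
period_ns = 1000 / clock_mhz  # ~1.5 ns per clock

for extra_clocks in (4, 6):
    print(f"{extra_clocks} extra clocks is roughly {extra_clocks * period_ns:.1f} ns")
# 4 extra clocks -> ~6 ns; 6 extra clocks -> ~9 ns of added latency
```

That 6–9 ns range is in the same ballpark as the LRDIMM latency-penalty figures discussed later in this page.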
at the 12:30 minute mark:
So the distributed architecture was rather .. uh .. aggressive for its time, when we first came out with that.
But what we found since is that as the industry standards body (JEDEC) looks ahead 3 and 4 years, and they look to the next memory standard, DDR4 .. which will come out in about 2014 .. they realized that at the higher frequencies .. the distributed architecture’s really the ONLY way to achieve those speeds .. without inducing such tremendous latency penalties.
at the 12:55 minute mark:
So .. here we show a drawing of the .. uh .. distributed buffer concept that the .. JEDEC is promoting for DDR4 and below it you see our actual DIMM design for DDR3 and you notice the similarities (laughs) .. that they are very .. almost identical, aren’t they ?
at the 13:10 minute mark:
So there’s a reason for that .. as the industry standardizes on that distributed architecture, that’s something again that we have a lot of IP around .. there’s a LOT of know-how as well on how to make .. the buffers along the edges work with the register in the center .. and you really need to design the thing WITH its end-application in mind ..
You can’t just approach it as a semiconductor-only company .. and say I’ll make a chip .. because there ARE a lot of timing nuances that you need to understand between the two.
at the 13:40 minute mark:
So we feel VERY well positioned for DDR4 .. we feel this architecture .. we have a significant lead in the industry .. in making this product work.
at the 13:50 minute mark:
And we have been doing that a long time – as I mentioned .. you know .. over 10 years of working directly with our customers engineering teams.
We actually got the idea for this “rank multiplication” back in 2003 through our working with Apple (AAPL). So Apple’s very dear to our heart.
Apple was using a PowerPC .. uh .. processor in their Xserve .. uh .. server and there were some rank limitations .. on how much memory they could access, and they wanted a higher density memory and didn’t know how to do it.
at the 14:17 minute mark:
So we got involved and were able to .. to figure out that if we could double these ranks on the DIMM, we could effectively build a larger .. uh .. DIMM for them, and we did.
And we sold several million dollars of that, but more importantly, it gave us the ideas of what we could do by building some controller technology onto the memory subsystem .. and building some silicon.
at the 14:35 minute mark:
So we were able to use some programmable logic at first .. in using our ideas, but then as the frequencies increased, we went with our own ASICs (application-specific integrated circuit).
And so we did that for DDR2 and now for DDR3 and we are well positioned for DDR4, so you can see we’ve .. got several patents that started way back in 2004 along the way ..
And we continue to innovate .. uh .. with this and just recently we announced a couple of collaboration agreements with some very large .. uh .. customers of ours, as we look not only to the next generation but to another generation beyond .. as uses of HyperCloud technology.
at the 15:10 minute mark:
So now we .. remember we talked about what the market looked like for next year and we just said .. “well if it was just 10%” .. uh .. or 1% rather .. 1% of the market.
We had a $500M market opportunity.
Now let’s look out for 2014 .. because as we increase to DDR4 speeds, the frequency goes WAY up .. and when the frequency goes up, the effect of the bus .. the memory bus is huge.
at the 15:30 minute mark:
So the industry’s estimating they’ll need 50% of all servers .. and it will be about 13M to 14M units, up from 9M today .. uh .. will require some kind of “load reduction” (technology) .. HyperCloud-type technology (or the LRDIMM which is infringing NLST IP – though LRDIMMs have latency issues).
If we take 10% of that, let’s call it 1.3M servers and let’s use 12 DIMMs per server, as an average .. now the densities move up .. so instead of 16GB and 32GB today, we’ll talk 32GB and 64GB .. 3 years from now .. we’re looking at ABOUT a $7.5B market size.
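The speaker’s 2014 market-size arithmetic can be reproduced directly. Note the implied average selling price per DIMM below is my own back-solve from his numbers, not a figure given in the talk:

```python
# Reproducing the ~$7.5B 2014 market-size estimate from the talk.
# The implied average selling price per DIMM is my back-solve, not a
# figure the speaker gave.

servers_needing_load_reduction = 13_000_000  # "13M to 14M units" (lower bound)
netlist_share = 0.10                         # "if we take 10% of that"
dimms_per_server = 12                        # "12 DIMMs per server, as an average"

dimms = servers_needing_load_reduction * netlist_share * dimms_per_server
target_market = 7.5e9                        # "$7.5B market size"
implied_asp = target_market / dimms
print(f"{dimms / 1e6:.1f}M DIMMs, implied average price per DIMM = ${implied_asp:.0f}")
```

The back-solve suggests the estimate assumes DIMM prices in the high hundreds of dollars, plausible for the 32GB/64GB densities he projects for 2014.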
at the 16:05 minute mark:
So .. significant growth .. we think we are well positioned for where the industry NEEDS to go, where it wants to go, and how to get there.
And our technology scales very well .. along that.
at the 16:15 minute mark:
So that wraps the .. HyperCloud IP part of our product line.
We’ll touch briefly on the NVvault – think of Vault as a safe – it’s a safe place to put your data.
So we created – along with work with .. uh .. several of our large OEMs .. DELL in particular .. a way to get rid of batteries in caching applications by using a combination of DRAM and flash.
at the 16:40 minute mark:
And oh and .. while doing that .. and we’ve done several generations of work WITH the battery .. so WE were trying to get rid of the battery as well as our customers .. we found a very viable solution today .. and we’ve expanded this into a family of products.
So we make these products for RAID caching and our new DDR3 NVvault is available to go directly into the memory bus (DIMM sockets/slots) for next-generation Intel servers.
at the 17:05 minute mark:
So we are working closely with Intel (INTC) on that. But it encompasses .. uh .. some of our IP in a digital controller .. we have put flash on one side, DRAM on the other and then you see over on the (referring to slides) .. on the left the little .. uh .. ultracapacitor backup.
So all that does is hold enough charge to mirror the data from the DRAM into the flash – it does that in about 30 seconds .. and then when the power comes back up on the system in about 4 seconds it pulls it right back into the DRAM and you are operating.
So how many of you have ever shut down your computer or had the power go out on you when you are in the middle of something ? If you have something like this, it would really protect you from that.
And you can imagine in a data center how important that is .. to be able to cache that.
at the 17:40 minute mark:
So you can see that NLST and Intel (INTC) are well-aligned – on the Vault product we are working directly to bring that to market.
That helps Intel move more into the storage and RAID adapter area – something they are interested in .. and along the HyperCloud, with the DDR4, Intel’s already proposing a distributed architecture .. uh .. to JEDEC.
at the 18:05 minute mark:
And .. that closes the technology .. uh .. gaps .. today.
Samsung has a significant lead in the DRAM lithographies .. Hynix, Micron, Elpida are catching up.
And we help them bridge that .. and bring a .. a coherent ecosystem .. for our customers. So they can get the densities they want.
at the 18:20 minute mark:
From a financial standpoint .. uh .. kind of about 12 quarters (of) CONSECUTIVE gross profit growth .. revenue growth .. uh .. we are near .. uh .. EBITDA breakeven .. right now.
So .. uh .. finished our last quarter (with) a little over $16M in revenue.
at the 18:45 minute mark:
Uh .. I’ll skip this one (probably referring to a slide ?) .. leave it for questions later on GAAP to non-GAAP.
And we’ve been able to scale our business over the last couple of years without significant increases .. in fact almost keeping our SG&A flat .. while more than doubling or almost doubling our R&D over the last two years.
So it is an efficient model .. that scales well because we have large customers.
at the 19:05 minute mark:
Uh .. once you get designed in, you get .. uh .. you ride the lifecycle for that .. uh .. product with them and and the volumes go significantly (higher) ..
at the 19:15 minute mark:
And we’re moving towards our steady-state model of profitability, which is about a 35% gross profit and about $150M in revenue.
Uh .. we anticipate R&D will take about 10% of that .. SG&A about 7% .. and bring about 18% to the bottom.
And we should end next year .. uh .. on those run-rates .. uh .. pretty excited about that.
at the 19:40 minute mark:
So .. balance sheet highlights .. I’d be happy to answer questions about that.
at the 19:45 minute mark:
Key takeaways .. we were established .. 11 year history .. you know I got to tell you when you are working with these major OEMs it sometimes takes 4 to 5 years to get qualified .. uh .. just to get in the door.
And it did with IBM .. it did with HP .. Apple took a couple of years .. uh .. Dell (DELL) moved pretty quick with us when we had the right product.
at the 20:03 minute mark:
So there’s some significant barriers to entry .. uh .. there .. uh .. the trends are favoring what we do in the HyperCloud and NVvault – people want to save data, they want more of it, and they want it faster.
And the industry itself is having difficulty doing that with the monolithic approach .. our IP is .. uh .. pretty well established .. uh .. we’ve battled in court a few times to protect it ..
And we have a predictable baseline of business .. and we are adding this new high-IP content to it ..
at the 20:30 minute mark:
And the same management team has been working together here for .. you know .. some of us 7-8 years together, and the founders are still active .. members of the company for the last 11 years ..
So I’d like to thank you for your attention today .. and open up for questions.
Yes.
at the 20:55 minute mark:
Question and Answer session:
Questioner1:
Um .. you mentioned that $150M of revenues will be roughly your breakeven in terms of ..
Chris Lopes:
That run-rate .. yes.
Questioner1:
That would be showing a .. a profit ..
Chris Lopes:
Hmm ..
Questioner1:
.. pre-tax profit ?
Chris Lopes:
Pre-tax profit of 18% ..
Questioner1:
And and .. when would you be able to achieve that .. which date, year or quarter ? When when do you anticipate ..
at the 21:15 minute mark:
Chris Lopes:
We think we can reach break-even somewhere near Q2 of next year (Q2 2012) ..
Questioner1:
Q2 of next year ..
Chris Lopes:
And show a profit of that .. and get to that run rate before the end of the quart.. end of the year next year .. Q4 (Q4 2012).
Questioner1:
Great.
And and your balance sheet – I didn’t see that .. it was up so fast.
What cash do you have on your balance sheet – cash and debt.
at the 21:35 minute mark:
Gail Sasaki:
(speaking in the background – faint audio)
At the end of the quarter we had about (unintelligible) in cash and about $3M in in (debt) ..
Chris Lopes:
So $11M in cash and ..
at the 21:50 minute mark:
Questioner1:
So what what are going to be your cash requirements .. um .. this year and next year .. um .. and and when .. I assume you will be turning cash flow positive right around breakeven second quarter (Q2 2012) .. so ..
Chris Lopes:
Right ..
Questioner1:
So .. your cash .. will last what .. how long will that cash last ?
Thanks.
Chris Lopes:
Well, our burn rate last quarter was less than $1M .. uh .. for the quarter.
And our sales are continuing to grow .. and our gross profit is continuing to grow so ..
We are in pretty good shape there.
at the 22:30 minute mark:
Questioner2:
(this seems like the moderator – since he closes the conference proceedings at the end)
Chris, that maybe a quick question from me .. uh .. in terms of the collaboration that you’ve done with Intel (INTC) .. uh .. can you talk a little bit more about .. you know the I guess the products that you are working on for the Romley generation of servers there .. currently ramping ..
And more importantly going forward as Intel (INTC) continues to .. uh .. I guess evolve their memory subsystem architecture in terms of where it exactly .. I guess attaches to the system .. how does that affect the way that .. uh .. your products are being implemented in servers.
at the 23:00 minute mark:
And also in the context of a product that Micron and Samsung (unintelligible) .. the “hybrid memory cube” .. how does that also .. well I guess affect the .. uh .. the performance and the competitive landscape for you guys.
at the 23:10 minute mark:
Chris Lopes:
Well that is a good question .. so .. the .. the collaboration with Intel (INTC) right now is primarily around our NVvault product for the Romley servers.
So we are building a combination DRAM and flash – which is similar to the “hybrid memory cube” .. uh .. although we match the densities .. identical ..
So 2GB, 4GB, 8GB of DRAM backed by 2GB or 4GB or 8GB of flash .. and that works right into a memory .. directly on the memory bus (i.e. DIMM sockets/slot) ..
at the 23:35 minute mark:
So if there are 24 sockets on that new Romley based server, you can fill all 24 (sockets/slots) with that and really create quite an effective .. you know .. virtual SSD .. uh .. running at DRAM memory bus speeds.
Uh .. the “hybrid memory cube” .. I’m seeing some interesting write-ups on that .. uh .. it seems to be geared at first for some more mobile applications .. difficult to get the densities there (probably means that for mobile applications it would be difficult to create large sized memories that fit in small form factor) ..
at the 24:00 minute mark:
Uh .. but that trend is a very positive one for us.
We have already looked at combining our IP on the multi-ranking (“rank multiplication”) with HyperCloud with the controller technology we are developing on flash .. to look at building something that might look similar to that .. uh .. with a combination DRAM and flash .. but NOT with an exact matching of .. densities (i.e. c.f. the “hybrid memory cube”).
at the 24:25 minute mark:
So imagine you’ve got large flash .. which is the lowest cost per bit .. uh .. memory out there .. buffered by high speed DRAM .. and it gives you the best of both worlds ..
And there are some significant IP challenges .. uh .. to doing that .. uh .. effectively .. but we think we have a head start ..
Moderator:
(maybe same as Questioner2)
Uh .. we’ll wrap it up there .. and there will be a break-out session downstairs .. and we will ..
Thanks Chris for your attendance.
Chris Lopes:
Thank you. Thank you.
Accompanying summarizing thread on the NLST yahoo board:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=38064&mid=38064&tof=1&frt=2#38064
UBS conference transcript (not exact) – Nov 17, 2011
Bill,
I think you may be getting some eyeballs this time – as NLST is right now in the spotlight.
I hope you don’t mind – but having the ability to post transcripts here is of great help to folks on the NLST yahoo finance board (since yahoo has a 4K character limit for posts and these transcripts can be very long).
Also this blog post contains some pretty significant history (at least in one place) that those who wish to do research may find useful.
NLST has achieved breakeven (as expected) – but more significantly they have just announced deals with IBM and HP (the HP one is “exclusive” – possibly for the 32GB 2-rank memory module they have announced).
Prior to that – in our discussions on the board we have established that:
– LRDIMMs have a “5 ns latency penalty” compared to RDIMMs.
– CSCO UCS has a “6 ns latency penalty” compared to RDIMMs.
– NLST HyperCloud has similar latency to RDIMMs (a huge advantage)
This has become clear from the IDF conference on LRDIMMs (video on Inphi blog main webpage) – where HP and Samsung state that (because of the latency hit) they may not even bother competing with 16GB 2-rank (based on the new 4Gbit DRAM dies).
They would however sell 32GB LRDIMMs because with those the only competition is 32GB 4-rank RDIMMs which have to run slower – because at 2 DPC (2 DIMMs per channel) the 4-rank DIMMs will not be able to achieve 1333MHz.
In any case that video has cleared up any doubts about the inferiority of LRDIMMs.
Separately, it seems the CSCO UCS solution (ASIC-on-motherboard) has latency issues as well – as has been documented in some articles on the web.
So with this the roadmap for NLST HyperCloud has become clear – it will be the predominant memory available for Romley (as well as legacy systems).
NLST has also made some statements about JEDEC having arrived at the same “distributed architecture” (compared to LRDIMM’s 628-pin register approach) for DDR4 – because they cannot run DDR4 at its higher speeds UNLESS they use the NLST approach.
So NLST is confident in stating that it WILL intersect with NLST IP – and NLST could own that space.
NLST was presenting at the Supercomputing SC’11 conference and the HP/IBM deals were announced during that. In addition NLST has announced a 32GB 2-rank memory module (using 4Gbit DRAM dies).
Since NLST has said they can make 32GB using just 2Gbit DRAM dies – thanks to its Planar-X IP (and this is referring to not just the usual 2x Planar-X, but a 4x Planar-X variety which IS mentioned in the patent docs – where they use 4 PCBs) – it could be that these are done this way (or maybe just 2x Planar-X using DDP, i.e. the dual-die packaging version of the 2Gbit DRAM dies – 2 x 2Gbit DRAM dies in one package).
We have concluded from this on the NLST yahoo board that this means NLST could also make:
– 64GB memory module using 4Gbit DRAM die
In any case, these are significant developments, and it is becoming possible that HyperCloud could become an almost complete replacement for LRDIMMs in 2012 (for Romley) – with HyperCloud becoming “mainstream” for DDR4, since at DDR4 speeds the problems that occur at 2 DPC occur at the 1 DPC level, i.e. the problem will present itself not only for high memory loading but even for lower density memory use at those high speeds.
On the litigation front – NLST is seeing positive things with the award of its newest patent – which is a continuation of the HyperCloud patent stream – and they point to the USPTO Office Action, which seems to have included most of the IP being used against NLST in the reexams – and after including that in the examination the USPTO has allowed the NLST patent to go through – so evidently that is being seen as valuable.
Hi netlist,
This post is turning into its own little mini-site, but I don’t mind. Thank you for keeping on top of the topic. It’s interesting watching the story of netlist over time.
Thanks.
By the way, are you noticing more traffic to this webpage recently (with the greater exposure) ?
I suspect Google has done its homework.
Hi Netlist,
Thanks. It does look like there were a number of extra visitors to the site. It’s hard to tell at times because I sometimes will get a lot of traffic from new posts, especially if they are very timely, and sometimes an older post will stir up a good number of visits as well.
Hi Ray,
It does look like Google has acquired a good number of patents over the past 1 1/2 years involving computer and networking hardware, which is pretty interesting on its own. While much of that might be to protect what they are doing in their own offices and data centers, it points to another business direction that they could potentially take.
Check out the excellent article on NLST – by same author who wrote a great article about NLST earlier (second link below):
http://www.theregister.co.uk/2011/11/30/netlist_32gb_hypercloud_memory/
Netlist puffs HyperCloud DDR3 memory to 32GB
DDR4 spec copies homework
By Timothy Prickett Morgan
Posted in Servers, 30th November 2011 20:51 GMT
Check out the comments section for that article as well – I have posted some comments there as well – comparison of LRDIMM vs. HyperCloud and why LRDIMMs have serious issues (“latency issues”, not able to support 1333MHz at full 3 DPC and the usual others – require BIOS upgrade and are not interoperable with other memory).
The earlier article by same author:
http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
Netlist goes virtual and dense with server memory
So much for that Cisco UCS memory advantage
By Timothy Prickett Morgan
Posted in Servers, 11th November 2009 18:01 GMT
Thanks, netlist.
I’m staggered both by the 32GB size of the memory and by the multiple buffers. That’s amazing.
This is an intro to HyperCloud and LRDIMMs as it relates to the Romley launch (Spring 2012) and general (inevitable) trends in the server industry that require LRDIMM/HyperCloud type solutions (to enable “high memory loading”) for Romley.
———-
Title: High memory loading LRDIMMs on Romley – An Introduction to Next-Gen Memory for Romley
Date: January 10, 2012
High memory loading LRDIMMs on Romley
Intel next generation Romley platform is expected in the Spring of 2012.
Intel has been promising LRDIMM (Load Reduced DIMM) memory will be available in time for Romley rollout.
LRDIMMs will allow greater memory loading – without the inevitable slowdown in speed that occurs when you fully load all DIMM slots with memory modules.
Why High Memory Loading ?
The use of lots of memory in a 2-socket server – filling out of most or all of the DIMM slots with memory modules – is needed for certain applications like:
– virtualization (powerful processors can now support multiple VMs – if only they had the memory to go along with that)
– cloud computing
– high performance computing (HPC) – computational fluid dynamics (CFD)
– computer aided design (CAD)
– in-memory databases (whole database is kept in RAM)
– high frequency trading (Wall Street firms which try to trade a split second before others)
However the trends are now in place which tend to bring those niches more into the mainstream – mass market/consumer use is dictating great leaps in server capability. Changes in consumer usage – moving away from desktop PCs to tablets and phones connected to the cloud – as hordes of non-tech folks become “computer users” through their smart phones.
This “high memory loading” technology will be important now because a number of factors are coming together that require greater use of memory per server:
– increasing cores per processor (multi-core) requiring higher total memory per server just to maintain the same memory per core levels
– power reduction achievable if you can cut server numbers in your data center/cloud farm
The greater concentration of processing power per server was leveraged well by the availability of virtualization. Many more VMs per server became possible – reducing server count and thus server power, but also the cost of the power plant and associated UPS provisioning per data center.
And the scaling up of applications to use more memory – beginning with high performance computing (HPC), high frequency trading, in-memory databases (where all data in DRAM), virtualization and cloud computing (all of which call for multiplication of DRAM per server).
And there are the changes taking place in the marketplace even at the consumer computing level – as “computer users” (traditionally desktop and laptop users) expand greatly in number, with smartphones becoming the new computer. Allied with that, the availability of iCloud-type services (5GB per user for nearly the whole world) and Siri (voice recognition for the whole world) means server requirements which vastly outpace all previous expectations. In fact it is not only a replacement for current computing – it expands the whole user base and multiplies total computing needs, as every person potentially becomes a computer user, even in the third world.
This need for search, voice search, and cloud storage per user drives the need for greater DRAM use per server.
This is a trend which will persist despite economic issues.
Thus server growth and DRAM-per-server growth are probably the most predictable, robust growth segments in the memory space for the next couple of years.
Other uses for high memory loading
Sometimes you may not want the max amount of memory possible for your server.
If you want a modest amount of memory at a lower cost, you can buy more bargain-priced modules (say 16GB instead of 32GB memory modules – which can be more than 2x the price of 16GB modules) and populate all your DIMM slots to get a reasonable amount of total memory at a lower cost.
Why LRDIMMs or HyperCloud solutions are needed (memory load and “rank”):
It is now possible to add 768GB memory to a 2-socket (2-processor) server – a 2-socket server has 2 processors – each processor has 3 memory channels (or 4 with Romley) – each memory channel can support 1 DPC (one DIMM per memory channel), 2 DPC or 3 DPC.
Thus you can have on a 2-socket server:
2 processors x 3 memory channels x 3 DPC = 18 DIMM slots
2 processors x 4 memory channels x 3 DPC = 24 DIMM slots (Romley)
You can now buy 32GB LRDIMMs (based on Inphi’s LRDIMM buffer chipset) or 32GB HyperCloud (available from Netlist) and populate all 24 DIMM slots: 24 x 32GB = 768GB on a 2-socket server.
If you use 16GB memory modules you can have:
18 DIMMs x 16GB = 288GB
24 DIMMs x 16GB = 384GB
With 32GB memory modules you could do:
18 DIMMs x 32GB = 576GB
24 DIMMs x 32GB = 768GB
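The capacity figures above all reduce to one multiplication; a minimal sketch (the helper name is mine):

```python
# Total server RAM is just: sockets x memory channels per socket
# x DIMMs per channel (DPC) x module size. Helper name is illustrative.

def total_capacity_gb(sockets, channels, dpc, module_gb):
    """Total installed RAM in GB for a fully populated configuration."""
    return sockets * channels * dpc * module_gb

print(total_capacity_gb(2, 3, 3, 16))  # 18 slots x 16GB -> 288 (GB)
print(total_capacity_gb(2, 4, 3, 32))  # 24 slots x 32GB -> 768 (GB)
```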
There is a “rank” limitation to consider when adding memory – older systems limited you to 6 “ranks” per memory channel and newer ones allow up to 8 “ranks” per memory channel. This means that to do 3 DPC you need to use 2-rank memory modules (with 4-rank memory modules you can only do up to 2 DPC).
The 32GB LRDIMMs and 32GB HyperCloud memory modules are 2-rank (or more precisely 2 “virtual” rank – as they use a technology called “rank multiplication” that Netlist holds IP for) thus you can populate up to 3 DPC. Currently you cannot use 32GB RDIMMs to do 3 DPC because 32GB RDIMMs available currently are all 4-rank memory modules – which means you can only do 1 DPC (if your server has a 6 “rank” per memory channel limit) or 2 DPC (for newer servers that have a 8 “rank” per memory channel limit).
16GB RDIMMs will be available for Romley at 2-rank (using the newer 4Gbit DRAM dies) and you can thus do 3 DPC while remaining within the 6 ranks (or 8 ranks for newer systems) per memory channel limit.
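The rank-budget constraint described above can be sketched as a one-line check (the helper name and structure are mine, for illustration):

```python
# Sketch of the rank-limit constraint: how many DIMMs per channel (DPC)
# can be populated before exhausting the channel's rank budget.
# Helper name is illustrative, not from any vendor documentation.

def max_dpc(ranks_per_module, rank_limit_per_channel, slots_per_channel=3):
    """Largest DPC such that DPC x ranks_per_module stays within the limit."""
    return min(slots_per_channel, rank_limit_per_channel // ranks_per_module)

print(max_dpc(2, 6))  # 2-rank modules, 6-rank limit -> 3 DPC (full population)
print(max_dpc(4, 6))  # 4-rank modules, 6-rank limit -> 1 DPC
print(max_dpc(4, 8))  # 4-rank modules, 8-rank limit -> 2 DPC
```

This matches the text: 4-rank 32GB RDIMMs are stuck at 1 or 2 DPC, while 2-rank (or 2-virtual-rank) modules can fill all 3 slots per channel.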
However, with RDIMMs you will experience speed slowdown as you add DIMMs per channel (DPC):
1 DPC – 1333MHz
2 DPC – 1066MHz (can be 1333MHz on some newer systems)
3 DPC – 800MHz
So when choosing memory, you need to be aware of:
– the “rank” per memory channel limit
– that as you add memory you want to make sure the achievable speed does not go down.
The speed slowdown will happen with RDIMMs but will not happen with LRDIMMs and HyperCloud (which is precisely why they were created).
This is related to electrical load issues – achievable speeds (without errors) go down as you add memory modules to a memory channel.
In any case, with RDIMMs at 3 DPC you cannot get more than 800MHz (or 800 MT/s) out of the system.
With HyperCloud memory you can do:
1 DPC – 1333MHz
2 DPC – 1333MHz
3 DPC – 1333MHz
LRDIMMs seem to do something similar, although it appears that at 3 DPC they are only getting 1066MHz (and not the full 1333MHz) – see references below.
For instance at 3 DPC the Inphi LRDIMM docs suggest only 1066MHz can be achieved.
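Putting the three options’ achievable speeds side by side (my tabulation of the figures given above; the LRDIMM 1 DPC and 2 DPC entries assume it matches HyperCloud there, per the “do similar” remark):

```python
# Achievable speeds (MT/s) by DIMMs-per-channel, tabulated from the text.
# LRDIMM values at 1-2 DPC are assumed equal to HyperCloud's ("do similar").

speed_at_dpc = {
    "RDIMM":      {1: 1333, 2: 1066, 3: 800},
    "LRDIMM":     {1: 1333, 2: 1333, 3: 1066},  # per the Inphi docs cited above
    "HyperCloud": {1: 1333, 2: 1333, 3: 1333},
}

for kind, speeds in speed_at_dpc.items():
    print(f"{kind:>10}: 3 DPC runs at {speeds[3]} MT/s")

gain = speed_at_dpc["HyperCloud"][3] / speed_at_dpc["RDIMM"][3] - 1
print(f"HyperCloud vs RDIMM at full 3 DPC loading: ~{gain:.0%} more bandwidth")
```

The ~67% bandwidth advantage at full loading is why these load-reduction parts matter most in fully populated servers, not lightly loaded ones.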
Since LRDIMMs have only now become available, Netlist has recently published a benchmark comparison between LRDIMM and HyperCloud – see references below.
LRDIMMs also seem to have a latency hit associated with them (Netlist mentions their “4 clock latency improvement” over the LRDIMM). The reason may be the 628-pin buffer chip design that LRDIMMs have chosen to solve the problem (more on this below).
Intel is pushing LRDIMMs because it had to come up with a way to handle the memory slowdown for high-memory-loaded systems – since the market for such systems is expected to increase in the future (analysts expect 20% of Romley servers will require LRDIMM/HyperCloud type solutions).
Several other players have tried “memory extender” solutions in order to fill this gap (in some cases they have one processor using the memory slots for the other processor in a 2-socket system):
HP BladeSystem Matrix
DELL FlexMem Bridge
IBM MAX5 memory extender
Cisco has implemented an ASIC-on-motherboard solution – and has probably gotten the most press.
LRDIMMs and HyperCloud both are ASIC-on-memory-module solutions.
Availability of LRDIMMs and HyperCloud
Inphi LRDIMM buffer chipset-based LRDIMMs and Netlist’s HyperCloud will be the only “load reduction/rank multiplication” products available at Romley rollout (more on this below).
LRDIMMs will not be usable until Romley arrives because they require a BIOS update to work on existing servers. Since a BIOS update is especially disruptive for existing users, this may be why Intel has targeted LRDIMMs at Romley – Romley can ship with the LRDIMM-compatible BIOS already in place.
Netlist on the other hand has been shipping HyperCloud memory since it can work on existing servers (without a BIOS update required). Netlist holds key IP in “load reduction/rank multiplication” and is the original inventor of this technology (in fact some folks may remember MetaRAM as the “original LRDIMM” – MetaRAM conceded to Netlist in Netlist vs. MetaRAM a few years ago) – see references below.
Inphi is also involved in litigation with Netlist. While MetaRAM held significant IP in this area (the most relevant of which it had to concede to NLST as part of the settlement; the rest was sold to Google), Inphi holds little IP here, since it is primarily a component supplier. Netlist vs. Inphi is continuing, while Inphi has retracted the retaliatory Inphi vs. Netlist – possibly, in my opinion, because 2 of the Inphi patents being used might have been damaged (double-patenting claims) had the case proceeded.
Since Netlist HyperCloud works on existing servers as well, it has already been qualified by CMTL and a number of smaller OEMs (SuperMicro, NEC, Gigabyte).
HyperCloud is already being used by Swift Engineering (on the Cray CX1) for HPC applications.
After the recent Supercomputing SC’11 conference, both HP and IBM signed deals for HyperCloud with Netlist – the HP deal was exclusive (meaning HP would exclusively use HyperCloud – details are not known), while IBM was non-exclusive – see references below.
When Romley rolls out, HyperCloud will be a tried and tested product, while LRDIMMs will become available to testers for the first time when they have Romley systems in place.
Changes in memory design can lead to teething problems – HyperCloud itself took some time to iron out issues during its earlier qualifications. So there is a possibility that LRDIMMs may have trouble meeting the top specs (i.e. achieving 1333MHz, or more precisely 1333 MT/s, at 3 DPC), especially since they will also depend on BIOS modifications that need to be present in all Romley motherboard versions.
Currently there are no other suppliers for LRDIMMs besides Inphi.
Inphi, IDTI and Texas Instruments are the top 3 buffer chipset suppliers for RDIMMs (Registered DIMMs – used in servers). However, Texas Instruments has not been interested in LRDIMMs (possibly related to an earlier settlement in Netlist vs. Texas Instruments related to HyperCloud), and IDTI, which initially showed enthusiasm, has been deemphasizing LRDIMMs over the last 3 earnings conference calls. Comments by Rich Kugele of Needham and Company during Netlist’s Q3 2011 conference call suggested that Texas Instruments and IDTI are both out of the race – which means they will not make the already-closed qualification window for Romley – see references below.
Details on LRDIMM performance had been sketchy for a long time – Netlist had hinted to “latency issues” with LRDIMMs in the past (and that LRDIMMs are not interoperable with RDIMMs, and require a BIOS update to work properly – unlike HyperCloud).
Recently however, Inphi’s LRDIMM blog has provided some useful information which has confirmed that LRDIMMs will experience higher latency, will not be interoperable with regular RDIMMs, and will require BIOS updates for existing servers (although this should not be a problem for Romley since those will eventually include the updated BIOS).
However, there is also some bad news with LRDIMMs – their latency hit appears to be so high that a 16GB LRDIMM will not be able to outperform a 16GB RDIMM (the ones that are 2-rank using the newer 4Gbit DRAM dies). This was disclosed in the question and answer session of the IDF conference video on LRDIMMs, available on the main page of Inphi’s LRDIMM blog – see references below.
In addition, during this same conference, HP and Samsung suggested that they would not be pushing 16GB LRDIMMs (since they are non-competitive with the RDIMM version) and would focus on 32GB LRDIMMs. The 32GB LRDIMMs also have a latency hit, but they manage to outperform the corresponding 32GB RDIMMs because those are currently only available in 4-rank models (a 2-rank 32GB RDIMM may have to wait for 8Gbit DRAM dies, which may not be available for a few years).
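The rank arithmetic behind the 16GB/32GB distinction can be checked directly. A 64-bit rank built from x4 DRAM devices uses 16 data chips (ECC chips excluded), so rank capacity is 16 times the die density. A small sketch (the x4 device width is my assumption, not stated in the text):

```python
# Why a 2-rank 32GB module needs 8Gbit dies, per the discussion above.
def rank_capacity_gb(die_gbit: int, device_width: int = 4) -> float:
    """Capacity of one 64-bit rank (ECC devices excluded) built from
    DRAM dies of the given density (Gbit) and width (x4 assumed)."""
    devices_per_rank = 64 // device_width   # 16 x4 devices cover 64 bits
    return devices_per_rank * die_gbit / 8  # convert Gbit to GB

print(2 * rank_capacity_gb(4))  # 2-rank with 4Gbit dies: a 16GB module
print(4 * rank_capacity_gb(4))  # 4-rank with 4Gbit dies: a 32GB module
print(2 * rank_capacity_gb(8))  # a 2-rank 32GB module needs 8Gbit dies
```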
So how did LRDIMMs get into this situation? The answer may lie with “know-how” – an understanding of the technology beyond the patents held, deriving from having actually invented the technology and from knowing what works and what does not.
While Netlist holds the IP in this area and has offered licensing on RAND terms to JEDEC in the past, this has generally not been listened to. (It does not help that Netlist has a case of infringement against Google – related to Google’s use of “Mode C”, which was in use in Google servers when the court forced Google to turn over a server to Netlist lawyers. Google had initially denied using “Mode C”, then conceded prior to turning over the server for inspection.) Unbeknownst to many, Google produces its own memory modules for internal consumption which allegedly violate Netlist IP.
However, Netlist has been working on its HyperCloud memory for 2 years since announcing it, and as the original inventor probably has a better grasp of the problems (Inphi is a component supplier with little IP and perhaps little experience in this area).
So in summary we have for LRDIMMs:
– LRDIMMs require a BIOS update for current servers – but Romley will probably ship with appropriate BIOS updates to support LRDIMMs
– LRDIMMs are not interoperable with standard RDIMMs
– LRDIMMs have “latency issues” – so much so that 16GB LRDIMMs are non-viable against 16GB RDIMMs (2-rank using the newer 4Gbit DRAM dies); HP/Samsung stated at the IDF conference on LRDIMMs that they will focus on the 32GB LRDIMM, since it still retains an advantage vs. the 32GB RDIMMs (because those are 4-rank, and a 2-rank one won’t be available until an 8Gbit DRAM die arrives in a few years).
– Inphi documents show them achieving a max speed of 1066MHz when running 768GB in a 2-socket server (see references below).
And for HyperCloud:
– HyperCloud is plug and play and requires no BIOS update (this is possibly related to Netlist IP in “Mode C” operation where the BIOS is fooled into thinking a lower “virtual” rank module is being used)
– HyperCloud is interoperable with standard RDIMMs
– NLST HyperCloud has similar latency to RDIMMs (a huge advantage) and has a “4 clock latency improvement” over the LRDIMM (see references below).
– HyperCloud runs 768GB @ 1333MHz in a 2-socket server (i.e. 3 DPC @ 1333MHz) (see references below).
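As a sanity check on the 768GB figure, here is a rough sketch assuming a 2-socket server with 4 memory channels per socket (my assumption for a Romley-class machine) populated at 3 DPC with 32GB modules:

```python
# Sketch of the 768GB @ 3 DPC configuration quoted above.
# Socket/channel counts are assumptions, not from the text.
sockets = 2
channels_per_socket = 4
dpc = 3                       # DIMMs per channel
module_gb = 32

slots = sockets * channels_per_socket * dpc
total_gb = slots * module_gb
print(slots, "slots,", total_gb, "GB total")   # 24 slots, 768 GB total
```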
Latency advantages
While both LRDIMMs and HyperCloud use an ASIC-on-memory-module approach to “load reduction/rank multiplication”, the LRDIMM design uses a huge 628-pin centralized “iMB” buffer chip with resultant long line lengths to the DRAM, and this might be why LRDIMMs have an inherent disadvantage when it comes to latency – see references below.
Looking at other “memory extender” options out there, Cisco UCS has tackled the “high memory use” problem with an ASIC-on-motherboard solution. This has led to some criticism (from HP) for adopting a non-standard approach.
Cisco’s UCS memory solution has been reported by some to have a “6 ns latency penalty” compared to RDIMMs – a significant latency hit, in fact WORSE than LRDIMMs (if the 6 ns figure is correct) – see references below.
Compare this to LRDIMMs – which the Inphi LRDIMM blog reports have a “5 ns latency penalty” compared to RDIMMs – see references below.
So in summary if you compare the latency advantages for Netlist:
– LRDIMMs have a “5 ns latency penalty” compared to RDIMMs.
– CSCO UCS has a “6 ns latency penalty” compared to RDIMMs.
– NLST HyperCloud has similar latency to RDIMMs (a huge advantage) and has a rather significant “4 clock latency improvement” over the LRDIMM (quote from Netlist Craig-Hallum conference)
Which one of LRDIMM vs. HyperCloud will win for Romley:
If LRDIMMs fail to live up to expectations, it is entirely possible that Netlist’s HyperCloud could wind up dominating this 20% server market.
Fortunately for NLST, it does not have to rely on the courts to prevail – LRDIMM providers have provided a helping hand by delivering a product that underperforms in several ways compared to HyperCloud.
Netlist guidance for a possible 1% share was enough to surprise some analysts (like Rich Kugele of Needham) at Netlist’s Q3 2011 conference call.
If you scale that closer to the full 20% server market, this could be a formidable opportunity.
It is thus not surprising that recently Netlist signed an “exclusive” deal with HP (this means that HP is exclusively tied to using HyperCloud – though details are unclear at this time) and a non-exclusive deal with IBM – see references below.
HyperCloud and DDR4:
Netlist HyperCloud’s approach and IP superiority also places it in a good position for DDR4.
This is because the problems that occur at 2 DPC and 3 DPC for current servers will start to appear at even 1 DPC levels – because of the higher frequencies/bandwidth for DDR4.
Netlist has been pointing out for a while that its IP intersects with (and will be required for) DDR4 (see article below comparing JEDEC plans for DDR4 with Netlist’s approach).
Netlist has also stated that while for Romley their IP is valuable for this 20% server segment, for DDR4 it will be “mainstream”.
This means the market for LRDIMM/HyperCloud type solutions would expand (from the estimated 20% servers for Romley) to the full 100% of DDR4 servers – since a LRDIMM/HyperCloud type solution will be required at even 1 DPC.
References:
——–
Memory slowdown at 2 DPC and 3 DPC:
http://blog.scottlowe.org/2009/05/11/introduction-to-nehalem-memory/
Introduction to Nehalem Memory
Monday, May 11, 2009
By Aaron Delp
https://blogs.oracle.com/jnerl/entry/configuring_and_optimizing_intel_xeon
Configuring and Optimizing Intel® Xeon Processor 5500 & 3500 Series (Nehalem) Systems Memory
By John Nerl on Apr 14, 2009
http://www.delltechcenter.com/page/04-08-2009+-+Nehalem+and+Memory+Configurations
04-08-2009 – Nehalem and Memory Configurations
http://blog.aarondelp.com/2010/02/hp-blades-offer-16gb-dimm-with-catch.html
Saturday, February 6, 2010
HP Blades Offer a 16GB DIMM, With a Catch
Aaron Delp
——–
Inphi (buffer chipset maker for LRDIMMs):
http://www.inphi.com
Inphi
IDF conference on LRDIMMs video available on main webpage:
http://lrdimmblog.inphi.com/
Webcast of HP, Samsung, ANSYS, Intel and Inphi presentation at IDF 2011 for HPC applications
Inphi webpage suggesting LRDIMM has a 5 ns latency penalty c.f. RDIMMs:
http://lrdimmblog.inphi.com/lrdimm-has-lower-latency-than-rdimm.php
LRDIMM has Lower Latency than RDIMM!
By David Wang on 08-09-2011 at 5:06 PM
quote:
—-
As described previously in other posts and in the whitepaper on the LRDIMM blog site, the buffering and re-driving of the data signals enable the LRDIMM to support more DRAM devices on the memory module, and for the entire memory module to operate at higher data rates.
The key to the LRDIMM-has-lower-latency-than-RDIMM claim lies in the fact that an LRDIMM memory system can operate at higher data rates than the comparably configured RDIMM memory system. Consequently, a higher data rate LRDIMM-based memory system can overcome the latency burden of having to buffer and re-drive the signals, and attain lower access latency compared to a lower data rate RDIMM-based memory system.
…
It shows that when operating at the same data rate, the Quad-rank LRDIMM has approximately 5 ns longer latency than the Quad-rank RDIMM. However, it also shows that the random access latency of both the LRDIMM and RDIMM memory systems decreases with increasing data rate. Consequently, when the highest-speed-bin RDIMM memory system, operating at 1066 MT/s, is compared to an LRDIMM memory system operating at 1333 MT/s, the LRDIMM memory system operating at 1333 MT/s is shown to have the lowest access latency compared to an RDIMM memory system.
—-
Introduction by Inphi exec Samer Kuppahalli to LRDIMMs:
http://lrdimmblog.inphi.com/server-design-summit-lrdimm-presentation.php
Server Design Summit LRDIMM Presentation
http://www.serverbladesummit.com/English/Collaterals/Proceedings/2011/20111129_S2-101_Kuppahalli.pdf
“Introducing LRDIMM in Servers and Workstations” at the 2011 Server Design Summit
http://www.edn.com/article/519386-Basics_of_LRDIMM.php
Basics of LRDIMM
LRDIMM is a memory module for high-capacity servers and high-performance computing platforms. It supports DDR3 SDRAM main memory, is fully pin-compatible with existing JEDEC-standard DDR3 DIMM sockets, and supports higher system memory capacities when enabled in the system BIOS.
Inphi — EDN, September 20, 2011
——–
Netlist (significant holder of “load reduction/rank multiplication” IP):
http://www.netlist.com/products/HyperCloud_landing.html
Netlist HyperCloud
http://www.netlist.com/products/ppt/HyperCloud_32GB_112011.pdf
32GB HyperCloud Brief
http://www.netlist.com/products/ppt/Tolly211119NetlistHyperCloudPerformance.pdf
Spec CPU2006 HyperCloud Benchmarks (Tolly Group)
http://www.netlist.com/products/ppt/Netlist_Sybase_WP_110911.pdf
Improving Data Analytics Performance Using HyperCloud Memory
http://www.netlist.com/products/ppt/Netlist_Virtualization_WhitePaper_Rev1_Web.pdf
Triple VM-per-Server Ratio Improvement with HyperCloud Memory
Good early introduction to Netlist IP:
http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
Netlist goes virtual and dense with server memory
So much for that Cisco UCS memory advantage
By Timothy Prickett Morgan
Posted in Servers, 11th November 2009 18:01 GMT
More recent update on Netlist IP relevance to DDR4:
http://www.theregister.co.uk/2011/11/30/netlist_32gb_hypercloud_memory/
Netlist puffs HyperCloud DDR3 memory to 32GB
DDR4 spec copies homework
By Timothy Prickett Morgan
Posted in Servers, 30th November 2011 20:51 GMT
——–
Netlist and Cisco UCS:
http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
Netlist goes virtual and dense with server memory
So much for that Cisco UCS memory advantage
By Timothy Prickett Morgan
Posted in Servers, 11th November 2009 18:01 GMT
Suggestion that Cisco UCS has a “6 ns latency penalty” compared to RDIMMs:
http://knudt.net/vblog/post/2009/10/05/UCS-Boot-Camp-Day-1.aspx
UCS Boot Camp – Day 1
by knudt October 5 2009 23:13
quote:
—-
…without affecting bus speed and only incurring minimal additional latency (6 ns)
—-
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=36202&mid=36389&tof=1&frt=2#36389
Re: Let the truth be revealed on IPHI CC .. CSCO UCS and LRDIMMs 6-Nov-11 11:31 am
http://www.eetimes.com/electronics-news/4083168/Cisco-discloses-server-ASICs
Cisco discloses server ASICs
Rick Merritt
5/19/2009 7:46 PM EDT
quote:
—-
SAN JOSE, Calif. — Cisco Systems has designed a set of proprietary ASICs to more than double the DRAM memory linked to an Intel Nehalem processor. The company aims to use the technology to leapfrog existing server makers in areas such as database performance or the number of virtual machines a server can support.
—-
http://bladesmadesimple.com/2010/01/384gb-ram-in-a-single-blade-server-how-ciscos-making-it-happen/
384GB RAM in a Single Blade Server? How Cisco Is Making it Happen (UPDATED 1-22-10)
Jan 22nd, 2010 by Kevin Houston
Mention of Cisco’s “Catalina ASIC” on their motherboard:
http://rodos.haywood.org/2009/06/nehalem-memory-with-catalina.html
Wednesday, June 10, 2009
Nehalem Memory with Catalina
Rodney Haywood
at 6/10/2009 02:28:00 PM
http://www.informationweek.com/blog/main/archives/2010/03/if_youve_got_bi.html
If You’ve Got Big Data, You May Want Big Memory
Posted by Charles Babcock, Mar 30, 2010 04:44 PM
The relevant paragraphs are:
—-
quote:
I knew Cisco blades worked with very large amounts of memory, 384 GBs a server, which would enable them to store and move large amounts of video around or subdivide the server into multiple virtual machines, each capable of moving a video stream. But I didn’t understand how. For most manufacturers, an Intel Nehalem server is going to carry 96 GBs of memory at top performance, or 144 GBs at a performance level that’s 20% off peak because it’s using less expensive and less than optimum direct, inline memory modules. Cisco sidestepped this restriction because both virtualized servers and cloud computing servers are hungry for more memory.
In 2006 Cisco started investing in Nuova Systems and in 2008, purchased the company to operate as an independent subsidiary. Nuova makes a custom ASIC that “fools each (Nehalem) chip into seeing four DIMMS as one,” which allows Cisco to pack the server with four times the memory of a standard server, according to the report, Unified Computing, Cisco and the Competition,” by the 451 Group.
This has allowed Cisco, an untested blade provider, to get its foot in the door at Taser. And it remains a feature that I don’t hear anyone talking about, including HP and IBM. Which makes me think it’s leading edge. “Competitors have said less about this feature than any other, indicating it’s the one they fear most,” wrote John Abbott of the 451 Group in his Dec. 22 report.
—-
——–
Netlist intellectual property (IP) and DDR4:
Good outline of Netlist IP advantage and relation to DDR4:
http://www.theregister.co.uk/2011/11/30/netlist_32gb_hypercloud_memory/
Netlist puffs HyperCloud DDR3 memory to 32GB
DDR4 spec copies homework
By Timothy Prickett Morgan
Posted in Servers, 30th November 2011 20:51 GMT
Check out the comments section for the above article for more info on Netlist vs. LRDIMM:
http://forums.theregister.co.uk/forum/1/2011/11/30/netlist_32gb_hypercloud_memory/
Netlist puffs HyperCloud DDR3 memory to 32GB
Posted Thursday 1st December 2011 09:38 GMT
——–
Netlist and VMware:
Netlist HyperCloud is the only other memory (other one is Kingston) certified for VMware:
http://alliances.vmware.com/public_html/catalog/searchResult.php?catCombo=System+Boards&isVmwareReadySelected=No&isServicesProduct=no&searchKey=
http://alliances.vmware.com/public_html/catalog/ViewProduct.php?Id=a045000000GQT8gAAH
16GB Hypercloud DDR3 2vR 1333
http://alliances.vmware.com/public_html/catalog/ViewProduct.php?Id=a0450000008ZdykAAC&productName=Kingston%20Memory
Kingston
VMware comments on HyperCloud value for virtualization:
http://www.prnewswire.com/news-releases/netlists-hypercloud-memory-approved-for-mds-micros-cloud-matrix-108121459.html
Netlist’s HyperCloud Memory Approved for MDS Micro’s Cloud Matrix
HyperCloud memory and MDS Micro Servers increase server utility for virtualization efficiency in the Cloud
IRVINE, Calif., Nov. 15, 2010
quote:
—-
“Netlist’s offering is unique in that it directly addresses the memory slowness that customers face on a server platform with a large amount of memory,” said Tim Myers, senior architect of VMware. “Instead of the speed being reduced, they are able to maintain the faster speeds which provides a better opportunity for customer satisfaction.”
…
Certification of HyperCloud memory modules on MDS Micro’s QUADv for the Cloud Matrix increases server performance to its full potential and enables up to 768GB of memory running at 1333 MT/s.
—-
http://www.prnewswire.com/news-releases/netlist-demonstrates-100-virtual-machines-on-a-single-standard-server-using-hypercloud-memory-at-interop-105373553.html
Netlist Demonstrates 100 Virtual Machines on a Single Standard Server Using HyperCloud™ Memory at Interop
Achieving 384GB of Memory, Netlist Improves Efficiency for Virtualization Applications
NEW YORK, Oct. 20
2010
quote:
—-
NEW YORK, Oct. 20 /PRNewswire/ — At Interop New York, Netlist, Inc. (Nasdaq: NLST) today announced its demonstration of 100 virtual machines on a single, fully loaded, 24-slot 2P server with 384GB of DRAM – highlighting its HyperCloud™ memory technology. The demonstration will showcase how companies can increase memory capacity and maximize server utilization and consolidation ratios, supporting the growing demand of virtualization applications.
…
Netlist will use a HP DL385 G7 dual socket server with AMD’s Opteron 8-core CPUs and 24 memory slots acting as the vehicle. Populated with 24 16GB, 2vRank HyperCloud DIMMs, the servers will run vSphere™ virtualization software from VMware with 100 virtual machines, Linux, and Microsoft-based host software.
—-
——–
Netlist and Nexenta OpenStorage
http://www.prnewswire.com/news-releases/netlists-hypercloud-memory-certified-with-nexentas-openstorage-software-122219888.html
Netlist’s HyperCloud™ Memory Certified With Nexenta’s OpenStorage Software
16GB HyperCloud Memory enables higher utilization and better price-performance for NexentaStor OpenStorage solutions
IRVINE, Calif., May 19, 2011
——–
Netlist and Swift Engineering (HyperCloud used on Cray CX1):
http://finance.yahoo.com/news/Netlists-HyperCloud-Memory-prnews-135478072.html?x=0&.v=1
Netlist’s HyperCloud™ Memory Streamlines Swift’s CFD HPC Sims
16GB memory module supports next-generation Computational Fluid Dynamic simulations and adds speed to aerodynamic racecar design process
Press Release Source: Netlist, Inc. On Tuesday October 18, 2011, 4:00 pm EDT
http://www.hpcwire.com/hpcwire/2011-11-29/swift_engineering_receives_idc_innovation_excellence_award.html
November 29, 2011
Swift Engineering Receives IDC Innovation Excellence Award
quote:
—-
To be recognized by the leaders in the super computer industry as an innovator is primarily the result of enjoying the best HPC resources available to small business through our partners Cray Inc., Altair, Metacomp Technologies, Platform Computing and Netlist.”
Swift uses a compatible suite of HPC analytical and design software from Altair’s HyperWorks, Metacomp Technologies’ CFD++ and Platform Computing’s cluster management software (Platform HPC) running on a pair of Cray CX1s and Cray CX1000 systems with integrated Netlist’s (Nasdaq: NLST) HyperCloud memory to further develop aerodynamic designs for its next generation Formula Nippon race car and Unmanned Aerial Vehicles (UAVs).
—-
——–
Inphi LRDIMM vs. Netlist HyperCloud:
http://finance.yahoo.com/news/Netlist-HyperCloud-Technology-iw-1971535964.html?x=0
Netlist’s HyperCloud Technology Faster Than LRDIMM on Next Generation Servers: Testing Validates the Speed Advantage of HyperCloud
Patented HyperCloud Technology Enables 1333 MT/s Memory Speeds on Future Intel(R) Xeon(R) E5 Family Based Two-Processor Servers While LRDIMM Only Enables 1066 MT/s
Press Release: Netlist, Inc. – Tue, Dec 13, 2011 6:00 AM EST
http://finance.yahoo.com/news/HyperCloud-Achieves-Server-iw-3376256974.html?x=0&l=1
HyperCloud Achieves Server Memory Speed Breakthrough at SC11
Demonstration Highlights HyperCloud’s Advantages Over Commodity RDIMM, LRDIMM
Press Release: Netlist, Inc. – Wed, Nov 16, 2011 4:00 PM EST
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=41242&mid=41261&tof=1&frt=2#41261
Re: LRDIMM Inability to run at 1333MHz Defeats the purpose 15-Dec-11 02:10 pm
The Inphi pdf document shows LRDIMMs achieving at most 1066MHz at 768GB:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=40867&mid=40951&tof=5&frt=2#40951
Re: NLST vs IPHI 10-Dec-11 08:00 pm
quote:
—-
Here’s another interesting fact (which I could not verify with NLST).
Take a look at the following IPHI’s presentation:
http://www.serverbladesummit.com/English/Collaterals/Proceedings/2011/20111129_S2-101_Kuppahalli.pdf
Browse to slide 18.
You can see that 32GB LRDIMM can not run 768GB at full speed of 1333MHz! They are only capable running that speed with 512GB populated server (16 LRDIMM populating 2 DIMMs per Channel).
—-
Comments by Netlist VP Chris Lopes indicating a “4 clock latency improvement over the LRDIMM”:
http://www.netlist.com/investors/investors.html
Craig-Hallum 2nd Annual Alpha Select Conference
Thursday, October 6th at 10:40 am ET
http://wsw.com/webcast/ch/nlst/
quote:
—-
Question:
at the 23:35 minute mark:
(unintelligible)
Chris Lopes:
Inphi (IPHI). Good question. How is HyperCloud different from what IPHI is offering.
IPHI is a chip company – so they build a register.
The register is then sold to a memory company.
And the memory company builds a sub-system with that.
And that’s the module they are calling an LRDIMM or Load-Reduced DIMM.
The difference is that the chip is one very large chip, whereas we have a distributed buffer architecture, so we have 9 buffers and one register.
Our register fits in the same normal footprint of a standard register, so no architectural changes are needed there.
at the 24:35 minute mark:
And our distributed buffers allow for a 4 clock latency improvement over the LRDIMM.
So the LRDIMM doubles the memory. HyperCloud doubles the memory.
LRDIMM slows down .. the bus. HyperCloud speeds up the bus.
So you get ours plugged in without any special BIOS requirement.
So it plugs into a Westmere, plugs into a Romley, operates just like a register DIMM which is a standard memory interface that everyone of the server OEMs is using.
The LRDIMM requires a special BIOS, special software firmware from the processor company to interface to it.
And it’s slower.
Does that answer your question ?
—-
——–
HP (exclusive), IBM deals with Netlist:
http://finance.yahoo.com/news/Netlist-HyperCloud-Technology-iw-1971535964.html?x=0
UPDATE 1-Netlist signs deals with IBM, HP
Mon Nov 14, 2011 5:19pm EST
——–
MetaRAM, Netlist and LRDIMMs:
See the extensive comments section in this blog post (warning: huge web page):
https://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/
Google to Upgrade its Memory? Assigned Startup MetaRAM’s Memory Chip Patents
By Bill Slawski, on November 20th, 2009
http://www.redorbit.com/news/technology/1815585/netlist_announces_settlement_of_patent_infringement_lawsuits_with_metaram/?source=r_technology
Netlist Announces Settlement of Patent Infringement Lawsuits With MetaRAM
January 28, 2010
quote:
—-
Under the terms of the settlement, filed in U.S. District Courts in Delaware and Northern California, MetaRAM will not sell, offer to sell, release, or commercialize the MetaRAM DDR3 controllers in the U.S. or outside the U.S. Netlist contended that MetaRAM’s DDR3 controllers and memory modules incorporating such controllers infringed its U.S. Patent No. 7,289,386, entitled “Memory Module Decoder.” A provision in the settlement protects Netlist if another company purchases MetaRAM’s patent and attempts to seek action against Netlist in the future.
“We are pleased to have successfully resolved this case,” said C.K. Hong, President and CEO of Netlist. “As the pioneer of this technology, the results of this settlement clearly underscore Netlist’s fundamental patent and product leadership. Netlist’s HyperCloud product-line embodies this foundational technology and Netlist remains committed to protecting its portfolio of intellectual property.”
—-
http://blogs.wsj.com/venturecapital/2009/07/08/turning-out-the-lights-semiconductor-company-metaram/
July 8, 2009, 5:46 PM
Turning Out The Lights: Semiconductor Company MetaRAM
http://www.theregister.co.uk/2008/02/25/weber_metaram/
MetaRAM double stuffs servers with memory
256GB box for $50k (was $500k)
By Ashlee Vance in Mountain View
Posted in Servers, 25th February 2008 20:25 GMT
http://venturebeat.com/2008/08/19/idf-intel-gets-behind-start-up-metarams-server-memory-solution/
IDF: Intel gets behind start-up MetaRAM’s server memory solution
August 19, 2008 | Dean Takahashi
——–
Texas Instruments (TI) lack of interest in LRDIMMs:
http://www.veracast.com/stifel/tech2011/main/player.cfm?eventName=2133_inphic
Stifel Nicolaus
Technology, Communications & Internet Conference 2011
Inphi Corporation
2/10/2011; 4:25 PM
Mr. John Edmunds
Chief Financial Officer
quote:
—-
TI is not developing an LRDIMM to our knowledge and .. uh .. their interest level seems to wax and wane at times.
—-
detailed quote:
—-
at 24 minute mark ..
competitive landscape
in servers .. because of the qualification cycles. . there really are some incumbent competitors ..
like IDT and TXN (Texas Instruments) ..
and so the 3 of us tend to split the market.
IDT and Inphi would probably share 80% of the market.
TXN would be somewhere in 10-15% range.
TI is not developing an LRDIMM to our knowledge and .. uh .. their interest level seems to wax and wane at times.
We go head to head with IDT – we respect them as competitors and we think market is going to want multiple suppliers.
It’s not a market that somebody from outside can come into easily just because of the long qualification cycles and the fact these are getting deployed across a wide range of SKUs (stock keeping units ?) .. the OEMs that (unintelligible) the memory module makers don’t want to qualify multiple suppliers because they have to deploy them across a wide set of SKUs ..
—-
quote by Inphi about the market for LRDIMMs being 20% of Romley:
—-
at 28:45 minute mark ..
what do we expect attach rate to be for LRDIMM ?
so this is sort of the $64,000 question .. uh .. you can talk to some people – depends you’re talking to end-user or someone who’s on the other side of the .. chip design .. some people who believe it’s 5-8% attach rate or 5-10% attach rate.
I think that we can show the power signature of LRDIMM was the same as an RDIMM, those people tend to just gravitate up in terms of maybe the higher end of that range – in terms of what their expectation of attach rate was.
As we were out on the road show, Young (?) is optimistic – I think he believes that .. uh .. 20% of the server market is high-end and memory intensive and that the attach rate will ultimately be about 20%.
It will take 6-8 quarters to sort of phase into that .. that level of volume consumption.
You know we got a call on the conference call from one of the analysts that they had heard that it could be as high as 30 or 35%.
I talked to somebody that actually spoke to the CIO where that was quoted from – and this CIO is very familiar with LRDIMM and he is a big financial institution CIO and his view was that anyone who was going to implement the next-generation VMware or try to do a virtualization implementation was gonna want LRDIMM in the configuration and for that reason he believed the attach rate would be more like 30 or 35%.
So when you are trying to gauge the demand here I think it is important to talk to .. uh .. data center end-users. That’s just one data point – may not be accurate .. uh .. for us anything over say mid single digits is is gravy relative to the street forecast today.
—-
——–
IDTI deemphasizing LRDIMM over time:
Rich Kugele of Needham and Company comments on Netlist’s Q3 2011 conference call stated that IDTI and Texas Instruments seem to have exited the LRDIMM space:
http://viavid.net/dce.aspx?sid=00008EC4
Netlist Third Quarter, Nine-Month Results Conference Call
Thursday, November 10, 2011 at 5:00 PM ET
quote:
—-
at the 17:35 minute mark:
Rich Kugele at Needham:
…
Uhm .. you know I just want to talk a little bit about the LRDIMM market as it relates to Romley.
And uh .. or the next-gen from Intel.
Uhm .. can you just talk about how big that market is – I know that in recent months we’ve seen a few competitors actually exit that ..
.. space .. uh .. from TI (Texas Instruments) and IDTI .. uh .. just outright ..
.. difficult time figuring out how many units that market actually is and how competitive the solutions are or aren’t.
—-
——–
—-
Analysis of LRDIMM latency vs. CSCO UCS vs. RDIMM and NLST HyperCloud.
Taken from this post:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=42437&mid=42448&tof=1&frt=2#42448
Re: HC potential based on IPHI data .. latency LRDIMM HyperCloud RDIMM 13-Jan-12 10:02 am
—-
My understanding from earlier is that HyperCloud has an identical or similar latency as RDIMM (which would be pretty amazing – compared to what LRDIMMs have been able to do).
So let’s see if that impression needs to be changed in light of new data or for newer updates of HyperCloud.
The info we have from NLST conference calls and the Inphi LRDIMM blog is quoted in two different units – nanoseconds and clock cycles. Since these are two separate units of measure, we need to reconcile them and see what the result suggests.
https://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-420587
netlist
01/13/2012 at 7:33 am
quote:
—-
So in summary if you compare the latency advantages for Netlist:
– LRDIMMs have a “5 ns latency penalty” compared to RDIMMs (from Inphi LRDIMM blog)
– CSCO UCS has a “6 ns latency penalty” compared to RDIMMs.
– NLST HyperCloud have similar latency as RDIMMs (a huge advantage) and have a rather significant “4 clock latency improvement” over the LRDIMM (quote from Netlist Craig-Hallum conference)
—-
RDIMMs have a 1 cycle latency penalty compared to UDIMMs – and RDIMMs are the standard for servers etc., so the comparison here is with RDIMMs.
From this article:
http://www.tomshardware.com/reviews/ddr3-1333-speed-latency-shootout,1754-3.html
Speed Vs. Latency: Myths And Facts
10:40 AM – January 4, 2008 by Thomas Soderstrom
one can see that the relationship between clock cycles and nanoseconds is:
DDR-333 – CAS 2 – 2 clock cycles – 6ns per cycle
DDR-667 – CAS 4 – 4 clock cycles – 3ns per cycle
DDR-1333 – CAS 8 – 8 clock cycles – 1.5ns per cycle
Using this quote (from Inphi LRDIMM blog):
– LRDIMMs have a “5 ns latency penalty” compared to RDIMMs.
Now “5 ns latency penalty” vs. RDIMMs would fit in “4 cycles” (and not in 3 cycles) since:
– 5ns/1.5ns = 3.33 clock cycles – which will fit in 4 cycles
So that translates to:
– LRDIMMs have a 4 cycle latency penalty compared to RDIMMs.
We also know that NLST has said NLST HyperCloud has a “4 clock latency improvement” over LRDIMM (quote from Netlist Craig-Hallum conference).
That means NLST HyperCloud latency cannot be too different from RDIMMs (possibly the same latency, and at worst not more than 1 cycle more than RDIMMs). So even analyzing these quotes alone suggests:
– that NLST HyperCloud has same or very similar latency to RDIMMs
Which is amazing – given the complexity of the technology, which Intel and its “MetaRAM”-like proxy, i.e. Inphi, have not been able to match.
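The unit reconciliation above can be restated as a small calculation. This is only a sketch of the arithmetic already in the quotes (DDR3-1333’s 1.5ns cycle, Inphi’s quoted “5 ns latency penalty”, Netlist’s quoted “4 clock latency improvement”); the variable names are mine, not from any source.

```python
import math

# One clock cycle at DDR3-1333 (667 MHz clock) is 1.5 ns, per the
# Tom's Hardware table quoted above.
DDR3_1333_CYCLE_NS = 1.5

# Inphi LRDIMM blog: LRDIMM carries a "5 ns latency penalty" vs. RDIMM.
lrdimm_penalty_ns = 5.0
# 5 / 1.5 = 3.33 cycles, which has to fit in 4 whole cycles.
lrdimm_penalty_cycles = math.ceil(lrdimm_penalty_ns / DDR3_1333_CYCLE_NS)
print(lrdimm_penalty_cycles)  # 4

# Netlist Craig-Hallum conference: HyperCloud has a "4 clock latency
# improvement" over LRDIMM, so vs. RDIMM the difference comes out to:
hypercloud_vs_rdimm_cycles = lrdimm_penalty_cycles - 4
print(hypercloud_vs_rdimm_cycles)  # 0 -> same (or very similar) latency as RDIMM
```

Since 3.33 cycles rounds up to a 4-cycle penalty, subtracting Netlist’s claimed 4-cycle improvement leaves roughly zero – the same conclusion reached in the text.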
Inphi LRDIMM problems on SuperMicro:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=42437&mid=42483&tof=1&frt=2#42483
Re: HC potential based on IPHI data .. Inphi LRDIMM problems on SuperMicro 7 second(s) ago
Inphi LRDIMMs on SuperMicro – trouble getting above 1066MHz at 2 DPC or 3 DPC, and 32GB LRDIMM not working at 3 DPC (on some boards perhaps).
http://www.supermicro.co.uk/support/faqs/results.cfm?id=22
This one confirms that “LRDIMMs require BIOS update”:
http://www.supermicro.co.uk/support/faqs/faq.cfm?faq=13039
Question
Board is x8dah+-F. I have purchased the 16GB memory modules hynix hmt42gl7bmr4a-h9 LRDIMMS according to the board compatibility chart but the board beeps with a memory error. I put in regular 4GB DDR3 memory and it boots fine. Please let me know what I need to do to get this working as I need a full 288GB of memory.
Answer
LRDIMMs will only work if the board has been flashed with the LRDIMM-enabled BIOS. However, RDIMMs and UDIMMs still work with this BIOS. Please contact Tech Support to get the LRDIMM-enabled BIOS.
If the user’s board has the standard BIOS loaded, then you need to use UDIMMs or RDIMMs in the board to boot the system to video and then flash to the LRDIMM-enabled BIOS. After that, you can populate the LRDIMMs. If the user does not have access to UDIMMs or RDIMMs, he will have to RMA the board.
People having problems getting 1333MHz with LRDIMMs:
http://www.supermicro.co.uk/support/faqs/faq.cfm?faq=13005
Question
We need to get 1333MHz working frequency, so the normal modules are not fit.
Do you mind LRDIMM will meet the requirement?
My board model is X8DAH+-F
Answer
If you want to use LRDIMM please refer following support info. the memory speed will up to 1066MHz only.
• 16 GB LRDIMM can run up to 3 DPC (total board capacity of 288 GB) at speeds up to 1066 MHz. Reducing the number of DPC will not increase the speed. LVDIMMs are not currently supported.
• 32 GB LRDIMM can run up to 2 DPC (total board capacity of 384 GB) at speeds up to 1066 MHz. Reducing the number of DPC will not increase the speed. LVDIMMs are not currently supported.
If you want to support LR DIMM you need flash BIOS(LR-DIMM support) first.
http://www.supermicro.co.uk/support/faqs/faq.cfm?faq=12955
Question
Please verify if the X8DTU-6TF+ motherboard needs to use 16GB Load Reducing (LR) dimms in order to max out the board to 288GB.
Answer
We wanted to eliminate confusion, so we’re only supporting LRDIMMs by default on the “-LR” SKUs. For example: http://www.supermicro.co.uk/products/motherboard/QPI/5500/X8DTU_.cfm?TYP=SAS&LAN=10&LRDIMM=Y
• LRDIMM (Load Reduced DIMM, for X8DTU-6F+-LR and 8XDTU-6TF+-LR Only)
• DDR3 ECC 1066 MHz memory with support of up 288 GB in 18 slots
Warning: For your system memory to work properly, be sure to use the correct BIOS ROM for your system.
For the X8DTU+-6F+, use the X8DTU+-6F+BIOS. For the X8DTU+-6F+- LR, use the X8DTU+-6F+-LR BIOS.
For the X8DTU+-6TF+, use the X8DTU+-6TF+ BIOS. For the X8DTU+- 6TF+-LR, use the X8DTU+-6TF+-LR BIOS.
To flash the BIOS, refer to http://www.supermicro.co.uk/products/motherboard/QPI/5500/X8DTU_.cfm?IPMI=Y.
http://www.supermicro.co.uk/support/faqs/faq.cfm?faq=12866
Question
We have a server X8DTU-6TF+ with the latest BIOS. Every time we install the 16GB memory LR-DIMMS, the system will not boot. Is there a BIOS to fix this issue?
Answer
Yes, please request a LRDIMM BIOS from technical support dated 7/14/11, until it is posted online.
http://www.supermicro.co.uk/support/faqs/faq.cfm?faq=12571
Question
How many MEM-DR332L-CL01-LR10(32GB DDR3-1066) can I install on X8DTU-LN4F+ with one CPU only?
Answer
With the BIOS for LR-DIMM, you can install up to six MEM-DR332L-CL01-LR10 with one CPU and the speed will be fixed at 1066MHz.
http://www.supermicro.co.uk/support/faqs/faq.cfm?faq=12551
Question
what is the memory speed if I install 18 pcs MEM-DR316L-HL01-LR13(Hynix HMT42GL7BMR4A-H9) in SYS-1026T-6RF+ to support 288GB memory size in total?
Answer
Memory speed will be fixed at 1066Mhz.
Some more info from the recent Inphi Needham conference (Jan 10-12, 2012) that corroborates comments here:
14th Annual Needham Growth Conference
New York Palace Hotel, New York
January 10-12, 2012
This Inphi conference call is very interesting – for the first time we are hearing Inphi make oblique references to NLST HyperCloud (see below).
The conference confirms several analysis points we have made here.
– Inphi is the only provider for LRDIMMs at Romley rollout (IDTI and Texas Instruments are out – as analyzed here) – however Inphi suggests IDTI may return by end of 2012
– confirms March 2012 Romley launch (earlier we mentioned Feb 2012)
– confirms that maybe only the 32GB LRDIMMs will actually sell (IPHI suggests that if so, the share drops from 10%-30% of Romley servers for the LRDIMM/HyperCloud market to “5% or less” for the 32GB LRDIMMs alone)
I have a feeling no one in their right mind would buy LRDIMMs – because of their significant “latency issues” and the fact that the 16GB LRDIMMs will not be viable (or visible because of lack of marketing by HP/Samsung).
In such a scenario we may see HyperCloud make an entry via the 16GB – i.e. gain visibility (since LRDIMMs will be non-viable/absent) – and thus take over the 16GB, 32GB segments (assuming 8GB may not be viable if 16GB are dropping in price because 4Gbit DRAM dies are being pushed by DRAM makers).
I noted the Inphi CFO’s voice thicken on a few occasions – notably around the “big launch” around Romley.
While Inphi WILL probably sell buffers (just like IDTI) for RDIMM, they may not see the numbers for LRDIMM (that they were basing their IPO on).
For Inphi the difference between selling an RDIMM buffer and an LRDIMM buffer is the difference between $2 and $20 (i.e. an $18 increase). The Inphi LRDIMM buffer chipset is then used by memory module makers like Samsung, Micron and others.
For NLST the margins per module are greater, since NLST makes the whole memory module (not just a buffer chipset to be sold to a memory module maker).
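As a rough sketch of why the module-maker model captures more revenue per module: the prices below are the ones quoted in the transcript ($2 vs. $20 buffer chips, $500 and $1200 module prices); treating prices as per-unit revenue and ignoring costs/margins is a simplification of mine, not a claim from the source.

```python
# Figures quoted in the Inphi transcript above.
RDIMM_BUFFER_PRICE = 2.0     # RDIMM register chip, per the transcript
LRDIMM_BUFFER_PRICE = 20.0   # Inphi "iMB" LRDIMM chipset, per the transcript
MODULE_PRICE_16GB = 500.0    # 16GB module price quoted in the transcript
MODULE_PRICE_32GB = 1200.0   # 32GB LRDIMM price quoted in the transcript

# Buffer vendor (the Inphi/IDTI model) captures only the chipset price,
# so LRDIMM adds about $18 per module over RDIMM:
chipset_uplift = LRDIMM_BUFFER_PRICE - RDIMM_BUFFER_PRICE
print(f"incremental chipset revenue per LRDIMM: ${chipset_uplift:.0f}")

# Module maker (the NLST model) captures the whole module price:
print(f"16GB module vs. LRDIMM chipset: {MODULE_PRICE_16GB / LRDIMM_BUFFER_PRICE:.0f}x")
print(f"32GB module vs. LRDIMM chipset: {MODULE_PRICE_32GB / LRDIMM_BUFFER_PRICE:.0f}x")
```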
About Romley rollout:
quote:
—-
at the 03:05 minute mark ..
Uh .. so we do expect to see inventories building towards release and then .. uh .. a fairly rapid ramp to Romley.
—-
This suggests that NLST may be utilizing its cash to build inventory as well (which would explain the recent “at the market” sale of $10M out of the earlier announced total of $40M – though we cannot be sure the $10M has been sold yet; that will be known at the next earnings).
On RDIMM vs. LRDIMM margins ($2 vs. $20) – 16GB at $500 or so and 32GB LRDIMMs would sell at $1200:
quote:
—-
at the 22:35 minute mark ..
That’s right .. uh .. and then depends on how much memory they are adding.
For the memory cards can be anywhere from $500 or if it was a 32GB memory card right now would sell for about $1200.
And .. so the difference between an RDIMM and an LRDIMM is the difference between a little over $2 and $20 – so incrementally it is about $18 right now.
And we are not seeing the memory module guys put a premium on that.
They could but we are not seeing that yet.
—-
On LRDIMM/HyperCloud market (10%-30% of Romley servers) and the possible market if you restrict to just 32GB LRDIMMs being sold (“5% or less” of Romley servers):
quote:
—-
at the 13:25 minute mark ..
Uh .. we’ve generally put a number out at about 10% – other people think as much as 30% of the market might be converting to virtual machines – and so the the usage could be as high as that.
at the 13:28 minute mark:
Other people feel like it is only 32GB memory cards and those will be lower – in the 5% or less sort of category in terms of adoption.
(NOTE: compare this to the analysis on the board about HP/Samsung comments about not pushing 16GB memory modules because they can’t out-perform the 16GB RDIMM 2-rank based on 4Gbit DRAM dies, and that only the 32GB LRDIMM may be pushed because 32GB RDIMMs are currently 4-rank – from HP/Samsung comments at the IDF conference on LRDIMMs on the Inphi LRDIMM blog main webpage).
—-
You can see them essentially confirming HP/Samsung comments (see the NOTE: above) about only pushing 32GB LRDIMM.
So you can see that while the market for LRDIMMs/HyperCloud is 10%-30% of Romley servers, the market for LRDIMMs in practice is just going to be the 32GB LRDIMM market – which may be very small. Inphi is confirming that figure may only be “5% or less” of Romley servers.
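The attach-rate scenarios discussed above can be made concrete with a placeholder unit count. The percentages come from the quotes; the one-million Romley server figure is a hypothetical assumption of mine, purely to show the scale of the gap between scenarios.

```python
# HYPOTHETICAL annual Romley server unit count (not from the source).
romley_servers = 1_000_000

# Attach rates quoted in the Inphi Needham conference commentary above.
attach_rates = {
    "LRDIMM/HyperCloud market (low end)": 0.10,
    "LRDIMM/HyperCloud market (high end)": 0.30,
    "32GB-LRDIMM-only market ('5% or less')": 0.05,
}

# Servers using buffered high-capacity memory under each scenario.
results = {name: int(romley_servers * rate) for name, rate in attach_rates.items()}
for name, units in results.items():
    print(f"{name}: {units:,} servers")
```

On these assumptions the 32GB-only scenario is a sixth of the bull case, which is the point being made about LRDIMM’s practical market shrinking if only 32GB modules sell.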
The Inphi CFO even goes out on a limb and says the industry will configure around LRDIMM for DDR4 (if they can’t get it right for DDR3, what sort of confidence will that inspire for DDR4? – see the article “Netlist puffs HyperCloud DDR3 memory to 32GB” for confirmation that NLST IP intersects with DDR4):
quote:
—-
at the 13:48 minute mark:
Uh .. longer term we think the industry is very interested in .. uh .. configuring around LRDIMM with respect to DDR4.
So we believe there will be wider adoption as we move forward into newer technologies.
—-
The Inphi CFO does not seem to understand the LRDIMM technology – he conflates the need for LRDIMM (which arises from capacitance/memory loading on the memory bus) with the need to add more memory as you add CPUs in order to keep the same memory-per-processor-core ratio. So he says that as you add CPUs (he should have said memory modules) you hit the capacitance issue:
quote:
—-
at the 07:47 minute mark ..
Gives you an idea that .. as you add CPUs to any existing configuration, because of the .. uh .. bus .. you are going to be limited .. because of a concept called capacitance to the amount of memory.
So as you add CPUs you actually have less memory available per CPU.
And that creates contention for a CPU to have to access and be able to access a lot of data for each given application.
So for memory intensive applications it’s almost choking them off .. in that sense.
Um .. LRDIMM is really designed to de-couple the memory stack from the CPU, so you can scale memory independently of the CPU.
And so you can have a consistent amount of memory available, even as you are adding CPUs to the architecture.
—-
In addition Inphi CFO seems to state that LRDIMMs have a “latency advantage” over RDIMMs – when that clearly is false (perhaps he is misled by the title of the Inphi LRDIMM blog post “LRDIMM has Lower Latency than RDIMM!” which actually concedes a latency penalty and then shows how it could be mitigated if RDIMM are running slower):
quote:
—-
at the 11:45 minute mark ..
This is a slide we presented at IDF – actually HP had presented this and it gives you an idea that there ARE advantages to LRDIMM in terms of the amount of capacity that’s available and the latency advantages vs. an RDIMM configuration (how are there “latency advantages” vs. RDIMM when LRDIMM has “latency issues” and can’t have better latency than RDIMM).
—-
Inphi CFO says benchmarking companies should be publishing results prior to launch of Romley:
quote:
—-
at the 12:35 minute mark:
So these are being validated now by third-party benchmarking companies and you’ll see these published shortly – I believe around the time of the (Romley) launch .. uh .. so that’s .. uh .. that’s important to us.
—-
Inphi CFO expressing some wariness in CIOs of companies validating LRDIMMs:
quote:
—-
at the 23:35 minute mark ..
But we need the CIOs to get out there and validate and see that.
So that’s why we think Romley will grow dramatically on its own, but LRDIMM will lag behind a bit before the CIOs validate it and then we think that order rates (?) will come in with LRDIMM as well.
—-
So despite all their talk of validation by Intel (INTC) – which may just refer to the earlier Inphi/INTC PR about Inphi’s “iMB” buffer chipset – they seem to be behind on validation of the actual memory modules by the companies that buy those memory modules.
Recall that for Inphi the validation process is more complex:
– validation of the Inphi “iMB” buffer chipset by Intel (INTC)
– validation by memory module makers who use that to make LRDIMMs (NLST doesn’t have this step)
– validation of those memory modules by the OEMs (and CIOs that Inphi mentions)
Inphi CFO on the barriers to entry – once you are through the long qualification cycle:
—-
at the 25:15 minute mark ..
The big hurdle in this market and the reason it is closed in a sense is that you have to go through so much qualification with Intel (INTC), then you got to go through qualification through the module maker (memory module maker who buys buffer chipsets from Inphi) – he has to submit it to the OEM and they all have to get happy that they can accept your product on their memory card.
It’s a big deal to them because they just don’t want to have to address it later – they don’t want to have to go back to this – so that’s why the qualification cycle tends to be so long.
And you know people have to wade their way through that and succeed in getting through it.
—-
Here Inphi CFO is suggesting that the big OEMs (HP/IBM/DELL) have qualified LRDIMMs for many of the motherboards:
quote:
—-
at the 24:00 minute mark ..
Question:
Have the big 3 OEMs .. have they already qualified these LRDIMMs ?
Answer:
Uh .. yeah. They’ve already .. all qualified LRDIMMs for their particular products – I think about 80% of the systems they will be introducing will be LRDIMM capable and .. uh .. they’ve had versions of LRDIMM for quite some time .. uh .. to validate so .. that hasn’t been as much of a hangup .. uh .. they want to make sure Intel (INTC) stands behind the whole thing and INTC – while they had validated for quite some time, I think just published .. August or September (2011) they started publishing the actual list and say ok we’ll stand up and take responsibility.
—-
However, this comment may just refer to the LRDIMM support in the motherboards – since LRDIMMs require BIOS updates with current servers, and as NLST mentioned that for Romley, they are working on the various BIOS updates required to support LRDIMM.
So this answer is misleading as it does not answer how many motherboards have validated LRDIMMs – just that “80% of the systems they will be introducing will be LRDIMM capable” – meaning the other 20% will not even have the BIOS updates to support LRDIMM.
It could be that these 20% are the servers which have a smaller number of slots – and so do not require LRDIMMs (since there will be no high memory loading) – or they could be motherboards whose makers are not bothering to do BIOS updates to support LRDIMMs (in which case HyperCloud would be the only one working on those boards).
Inphi CFO confirming that IDTI is out of the LRDIMM space:
quote:
—-
at the 24:50 minute mark ..
It’s hard to say – IDTI we know has been developing a product .. uh .. and so they .. uh .. they could come out.
They don’t .. they don’t talk actively about aggressively pursuing the market so they may wait until Ivy Bridge – it’s more convenient to do this to do around the next platform launch than it is to do it in between platforms.
—-
Inphi CFO talking about HyperCloud (“smaller in the space overall” – since NLST doesn’t sell RDIMM buffer chipsets while Inphi does):
quote:
—-
at the 25:00 minute mark ..
Uh .. so there one .. there is .. are other guys that will try that are smaller in the space overall and we’ll have to see whether they can succeed or not (NOTE: here seems are talking about NLST HyperCloud).
—-
Some insight into the Intel roadmap (Romley and Ivy Bridge after that will be DDR3, and Haswell after that will be DDR4):
quote:
—-
at the 26:05 minute mark ..
Ivy Bridge will be in 2013 – it’s the “tick”, if you will, after Romley comes out (which is the “tock”).
It’s the next big platform .. uh .. after Ivy Bridge will be the Haswell platform and that will come with DDR4. And that will be sometime in 2014.
…
at the 26:35 minute mark ..
Uh .. yes .. it (Ivy Bridge) will still be based on DDR3 – in both RDIMMs and LRDIMMs.
—-
IDTI comments on LRDIMM prospects for Sandy Bridge (Romley) ..
Jan 30, 2012 at 1:30 PM PT
IDT Third Quarter Fiscal Year 2012 Financial Results
http://ir.idt.com/eventdetail.cfm?EventID=107803
at the 05:05 minute mark ..
In servers, Intel’s Romley platform is slated to launch later this quarter, creating demand for IDTI’s new timing solutions, DDR3 memory interfaces and temperature sensors.
at the 05:15 minute mark ..
Our LRDIMM product is being optimized to accommodate 3 DIMMs per channel at 1600MHz – which is a sweet spot for adoption in future platforms.
We will be qualified later this year with Intel’s Ivy Bridge processor, which is expected to launch in 2013.
The timing of that qualification should also enable us to catch the delayed Sandy Bridge ramp.
The biggest opportunity of course is the DDR3 to DDR4 transition in 2014, for which IDT already has a product in development.
Question & Answer session ..
at the 20:30 minute mark ..
…
at the 39:30 minute mark ..
Joanne Finney of Longbow Research:
Hi, thanks. Good afternoon guys.
Um .. a question on the computing side as well, regarding servers.
The first – one question on do you have any sense of whether you have different market share on servers built by the OEMs like HP, IBM etc. vs. those more custom built .. uh .. servers.
And secondly, do you have a sense of of how much market share the LRDIMM is likely to take this year on the Q2 (?) Romley systems.
And then I have a followup.
IDTI exec:
Hi, I’m sorry Joanne, I missed the last part of the question.
Joanne Finney of Longbow Research:
Oh, I was just wondering what what your CURRENT assessment was of the share of DIMMs that are likely to be LRDIMMs this year.
I understand you don’t have a solution until NEXT year – so I was wondering how much you thought you are vulnerable to that loss of opportunity this year.
IDTI exec:
Ok, good question.
So, on the first part as far as our share at OEMs vs. custom servers .. other new data centers that are coming along.
MOST of the revenue that we are talking about today is being sold to .. uh .. the DIMM manufacturers – the leading memory module manufacturers .. uh .. to whom (we) are selling the memory interface product.
In addition to that, when we talk about our server business, we are also selling PCI Express switches, signal integrity products, timing solutions and so forth .. to the OEMs that are building the server boxes.
And most of THOSE are going today to the standard household names that we all know and love that build servers.
The custom guys .. uh .. are really a new customer set for us .. who we are beginning to engage actively with .. uh .. and I expect that to be an increasing portion of our revenue in the future.
But when we talk about our server revenue today, it is mostly the memory module manufacturers and the household server names.
at the 41:45 minute mark ..
For the second part of your question with respect to LRDIMM.
We have been very consistent in our .. uh .. discussion of the size of the LRDIMM market.
We believe .. that in the Sandy Bridge .. uh .. generation of Romley .. uh .. that the attach rate for LRDIMM will be small.
It will be probably 2 or 3 percent (2%-3%) of all of those Romley .. of all of those servers.
Now, remember Intel’s got this tick-tock strategy .. uh .. so the tock is the Sandy Bridge and then there is a die-shrink which is the tick .. which is Ivy Bridge.
Now, Ivy Bridge is 1600MHz, whereas Sandy Bridge is only 1333MHz.
Ivy Bridge also allows for 3 DIMMs per channel (3 DPC), whereas Sandy Bridge only allows for 2 DIMMs per channel (2 DPC) (NOTE: probably mean at full speed).
at the 42:40 minute mark ..
So if you go through the analysis .. which I am not going to bore you with here .. and you look at the benefits of LRDIMM in Sandy Bridge, the cost-performance tradeoff is not .. uh .. not very favorable.
It turns out – now just give you the answer .. uh .. that you can build a DIMM using .. uh .. uh .. 64 .. I’m sorry 4Gbit DRAM and standard Registered DIMM (RDIMM) that has .. really a lower cost and roughly equal performance to what you would get with LRDIMM – that’s why the attach rates for LRDIMM in Sandy Bridge is relatively small.
The only place where LRDIMM will give you a performance tradeoff in the Sandy Bridge generation is in the 32GB DIMMs, not in the 16GB DIMMs.
So the 32GB DIMMs are only about a 2-3% of the total market.
at the 43:45 minute mark ..
That that .. that’s the explanation for why that attach rate is small.
Now go to Ivy Bridge where you’ve got 1600MHz (and) 3 DIMMs per channel (3 DPC) – go through the same analysis – it is MUCH more favorable for LRDIMM.
And so we anticipate that in the Ivy Bridge generation, the attach rate will be 15%-20%.
at the 44:05 minute mark ..
But that .. that’s a long winded answer .. uh .. but there’s actually some careful analysis that goes behind our .. our market size estimates.
So .. it’s for that reason that we deemphasized our LRDIMM for Sandy Bridge and focused instead on developing a product that could meet the Ivy Bridge performance specifications .. in 1600MHz.
And THAT product will be .. uh .. sampled and hopefully qualified .. this year.
As it so happens, the Sandy Bridge ramp has been delayed such that we may also be able to participate in THAT .. uh .. market, although it is relatively small for the reasons I’ve just described.
So .. hopefully that answers your question .. sorry for the long-winded explanation.
Joanne Finney of Longbow Research:
No, no, I think that was a great answer. There certainly are a lot of questions that float around about that from many quarters .. so ..
at the 45:05 minute mark ..
If I could .. a followup .. you you just (see ?) higher than expected server business last quarter and you also mentioned some share gain, so first could you explain where that .. uh .. extra business came from .. was because of some of these pre-shipments of Romley-based .. uh .. systems, or was it the older technology which I’ve heard from other quarters did a little better last quarter than people expected.
IDTI exec:
The primary reason in my opinion why we captured more market share is because we have the lowest power Registered DIMM memory interface on the market – for DDR3.
Far lower than any of our competitors.
Uh .. and that has led to an increase in IDT’s share.
It is also important to mention that while some of our competitors may have been distracted by other markets, IDT has really remained focused on the mainstream high volume markets.
And today that’s DDR3 .. uh .. in 2014 it’s going to be DDR4 – we are already developing products for that.
But because of our .. uh .. focus on DDR3 and in not getting distracted by some of these niche markets .. uh .. we’ve been able to improve our performance, lower our power and grow our marketshare.
at the 46:30 minute mark ..
Now .. going forward .. uh .. we’ll also have additional upside because of the Romley transition.
We think that’s still in front of us.
Uh .. so that’s additional share that we hope to be able to gain .. but the primary reason for our share gain in December was because we are selling more of our low power .. uh .. DDR3 memory interfaces.
Joanne Finney of Longbow Research:
Would you say then that all of the upside in December was because of the share gains, or was there also an aggregate increase in the .. in the server shipments – in the market as a whole.
IDTI exec:
I believe we’re seeing some early volume and some minor amount of early volume in preparation for the Romley launch.
Um .. but most of it is simply due to the fact that we’ve got a better product.
…
Commentary on the Inphi comments at the Stifel Nicolaus conference on Feb 7, 2012 (partial transcript posted above) ..
–
–
While 32GB, as mentioned elsewhere, may occupy 2%-5% of the market in 2012 and grow upwards from there, Inphi describes the 16GB market as a “healthy chunk” – this is an area in which LRDIMMs will not compete according to HP/Samsung comments from the IDF conference on LRDIMMs video on the Inphi LRDIMM blog main webpage.
quote:
—-
at the 2:50 minute mark ..
John Edmunds – CFO:
Whereas the majority of the market today would sit in .. uh .. 8GB memory cards. Pretty healthy chunk in 16GB and then 32GB are fairly nascent – you might see those become .. oh 2-5% of the market in 2012 and then continue to grow as we move towards 2013, 2014 and 2015.
—-
–
–
–
Inphi quotes HP comments on LRDIMM as if they were a positive endorsement – while HP/Samsung said at IDF conference on LRDIMM video on Inphi LRDIMM blog main webpage that they would not push 16GB LRDIMMs (not competitive against 16GB RDIMMs 2-rank using 4Gbit DRAM) but would sell 32GB LRDIMM.
–
–
quote:
—-
And that’s the advantage of .. uh .. of LRDIMM coming in this space.
So this is a slide .. uh .. that HP showed at the .. uh .. IDF conference in September (2011) – it talks about the advantages of LRDIMM.
—-
–
–
Inphi is forecasting sales of RDIMMs from initial pent-up demand for Romley, and LRDIMMs from second half of the year:
quote:
—-
at the 7:40 minute mark ..
We envision a growth in LRDIMM happening in the second half of this calendar year.
So we think there will be two waves of growth for us this year.
The first way will just be from the introduction of Romley – in the first half of the year – we are very comfortable with pent up server demand and some gain in share that the RDIMM product line will drive growth for us through the first half of this year.
–
–
–
at the 8:05 minute mark ..
We think CIOs will then be able to test and evaluate Romley systems with LRDIMM configurations in the first half, and the reorder rate should step up for LRDIMM in the second half.
And that should drive additional .. uh .. sequential growth for us in the third and fourth quarters of this calendar year (Q3 2012 and Q4 2012).
–
–
So again, sort of two waves of growth – an initial wave from Romley (and RDIMM sales) and a second wave from LRDIMM.
—-
–
–
–
–
Inphi saying that “for sure” the 32GB market will belong to LRDIMM – estimating it at 2%-5% of the market – but he is less sure about the 16GB market.
quote:
—-
at the 8:25 minute mark ..
The $64,000 question is how much of the RDIMMs market is going to convert over to LRDIMM.
Umm .. and you know that will happen for SURE on 32GB memory cards – people estimate that to be anywhere from 2%-5% of the market.
Uh .. it will happen on some 16GB cards .. uh .. and .. you know we don’t know quite how many just yet.
—-
–
–
–
Inphi saying the “only way” to do 32GB or 64GB is with an LRDIMM – “only way”, that is, if you infringe on NLST IP and then assume they are not a competitor:
quote:
—-
at the 8:45 minute mark ..
And again, if you look over time .. uh .. if we just get back to this slide for a second, you’ll you’ll see the need for LRDIMM continue to grow, because the only way to do a 32GB or 64GB card is with an LRDIMM device.
—-
–
–
–
Inphi again on 16GB being iffy – but notably not clear about WHY 16GB is so unapproachable for Inphi:
quote:
—-
at the 9:00 minute mark ..
Uh .. and we believe that it CAN be economical and .. uh .. applicable in 16GB environment as we move forward as well .. uh .. but it will take up more tuning .. uh .. in the generations to be able to get to that
(NOTE: meaning they ARE seeing some problems in the 16GB environment. However, those tuning issues – from the “high latency” – are present in BOTH the 16GB and 32GB models. It’s just that at the 16GB level, 16GB RDIMMs built 2-rank with 4Gbit DRAM dies leave no advantage to buying a 16GB LRDIMM, while 32GB RDIMMs will remain 4-rank until 8Gbit DRAM dies emerge many years later – so that disadvantage does not apply to the 32GB LRDIMM.)
—-
–
–
–
Inphi suggesting cloud computing will become 57% of market by 2015:
quote:
—-
at the 9:45 minute mark ..
There are some interesting facts .. uh .. in in the data networking market .. and .. uh .. these come out of some of the Cisco Global Cloud Index information.
But if you look within a data center, the .. first of all the graph on the far right .. uh .. shows that cloud computing will become as much as 57% of the market by 2015.
So all of the growth in data centers is going to happen in cloud computing oriented systems over the next 4 or 5 years.
—-
–
–
–
–
–
Inphi suggesting DDR4 will be shipping into Intel’s “Haswell” and happen around 2014-2015:
quote:
—-
at the 12:25 minute mark ..
So the basic story for the company is growth is fed in 2012 and 2013 by the advent of LRDIMM in the server market.
In 2014 and 2015 it will be fed by .. uh .. 100Gig PHY and CDR shipping into the communications market and so ..
Uh so .. that’s happening at the same time as DDR4 will be shipping into Haswell.
So 2014 will be a very big year for us overall.
—-
–
–
–
–
Here the analyst asks Inphi again about the 16GB. Inphi answers that not all memory module makers CAN make a 16GB RDIMM (because they may not have access to 4Gbit DRAM dies) – so by that account they will need LRDIMM (supposedly). Yet Inphi itself is excluding 16GB from the possible entry points for LRDIMM (they don’t explain that it can’t compete there, as HP/Samsung have said – see above). So where does Inphi suppose the 16GB memory module makers lacking 4Gbit DRAM dies are going to look – NLST IP?:
quote:
—-
at the 14:20 minute mark ..
Tore Svanberg – Stifel Nicolaus (Analyst):
Very good, thank you very much.
So I guess the big question is this you know crossover or penetration.
And you said 32GB is going to be all LRDIMM.
Now what what about 16GB ?
Uh .. is is it going to be 50-50 ?
–
–
–
–
John Edmunds – CFO:
So .. the .. crossover really is a function of .. uh .. the advent of 4Gbit DRAM .. uh .. and so some memory companies have DRAM that they can offer and some some end-users can solve the memory capacity issue just by moving from 2Gbit to 4Gbit memory.
–
–
–
at the 14:50 minute mark ..
Uh .. in other cases .. uh .. some memory companies don’t have 4Gbit DRAM, so they really like LRDIMM because it gives them the opportunity to offer a competing product.
We can do .. uh .. 3 DIMMs per .. uh .. per channel (3 DPC) .. uh .. with 16GB (memory) .. uh .. cards .. uh .. at 1333MHz – our product is fully capable of doing that stuff – nothing wrong, no limitation in our product (speaking really fast) .. in some environments .. uh .. either the memory company or .. some combination of the memory and the system company .. umm .. have a limitation where they don’t have as much technical margin as they would like to have to take that into production.
–
–
at the 15:30 minute mark ..
So some OEMs will be able to do that with 16GB (memory cards), and other other OEMs will feel like they don’t have enough margin.
And the margin’s really accomplished by tuning the overall system – so you can tweak the BIOS .. uh .. you can .. uh .. do some things with the memory .. uh .. substrate itself, the card that it goes on (i.e. PCB), and you can do some things with the motherboard sometimes that’ll allow you to increase the .. a little bit of the technical margin that you are looking for .. uh .. to feel comfortable you could take that into production.
–
–
at the 15:55 minute mark ..
That .. we think will all be addressed as we move into Ivy Bridge (the next-generation after Romley) in any event.
But some people will be able to do it ahead of Ivy Bridge in Romley – it’s just depends on the OEM.
—-
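For scale, the “3 DPC with 16GB cards” configuration mentioned in the quote above implies the following totals (a rough sketch; the 2-socket, 4-channels-per-socket Romley/Sandy Bridge-EP layout is my assumption, not stated in the transcript):

```python
# Server capacity implied by "3 DIMMs per channel (3 DPC) with 16GB cards".
# Assumed platform (not stated in the transcript): a 2-socket Romley
# (Sandy Bridge-EP) server with 4 memory channels per socket.

CHANNELS_PER_SOCKET = 4
SOCKETS = 2
DPC = 3        # DIMMs per channel
DIMM_GB = 16   # 16GB modules

per_socket_gb = CHANNELS_PER_SOCKET * DPC * DIMM_GB
server_gb = SOCKETS * per_socket_gb
print(per_socket_gb, server_gb)  # 192 384
```

So the tuning question Inphi describes is about qualifying a fully populated 384GB-class box at 1333MHz, not about a corner-case configuration.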
–
–
–
–
Inphi being uncertain about the demand for LRDIMM – perhaps correctly suggesting that OEMs have little interest in pushing memory that would obviate the need to buy more servers (if just adding memory will allow creating more virtual machines on the same servers etc.):
quote:
—-
at the 16:10 minute mark ..
Tore Svanberg – Stifel Nicolaus (Analyst):
And, you know, you showed a chart, I think it was maybe an IVC (?) chart looking at you know how 32GB and then eventually 64GB will ramp.
Umm .. but when you talk to your biggest customers, let’s say, you know Micron and Samsung, I mean how how are they looking at that type of ramp.
Uh .. is it very similar .. uh .. and you know what types of penetrations are they talking about both this year and next year ?
–
–
–
–
John Edmunds – CFO:
Umm .. so .. it’s difficult in the supply chain to get a lot of .. uh .. uh .. forecast.
Uh .. because essentially 32GB is a new product altogether.
So in in general people don’t know how much demand there will be for that.
So we actually think it’ll become .. uh .. uh .. it’ll become demand .. it will be pull driven in effect, because you will have CIOs who are saying I’m going to order a batch of Romley systems, but I want them to be the LRDIMM configured, because I’ve tried that .. I can see that really benefits my application.
Uh .. and .. because .. for two reasons, the new product and because .. uh .. the the system and memory guys don’t know how many customers (are) out there – they are going to call for that.
They are conservative right now on what they think will actually be the demand or what the shipments will be ..
–
–
–
at the 17:25 minute mark ..
As I showed you earlier on that one example, there is also less hardware to ship if you are shipping that kind of configuration.
So I’m not sure anybody’s out there banging the bush sayin’ “hey do this and buy less hardware” right ? It’s a little bit of an anomaly in that sense.
—-
–
–
–
Inphi pushing out LRDIMM adoption to post-Romley era i.e. “Ivy Bridge”:
quote:
—-
at the 17:35 minute mark ..
But I think once the .. uh .. once the Romley systems get out and people are able to verify that you can HAVE 50% more virtual machines or you could have, you know, better throughput, why wouldn’t they go with the LRDIMM.
They are going to run in that direction.
And .. uh .. we think that’s all good .. uh .. we think as Ivy Bridge comes in you’ll see more .. uh .. applications of LRDIMM ..
—-
–
–
–
–
Inphi going out on a limb and comparing their LRDIMM (centralized 628-pin buffer) with DDR4’s decentralized buffer setup (something which looks more like NLST IP – as suggested by NLST and also by this article: “Netlist puffs HyperCloud DDR3 memory to 32GB – DDR4 spec copies homework”):
quote:
—-
The good news is when we get to DDR4 people are more interested in gravitating towards the LRDIMM .. uh .. architecture, where people can choose to buy a separate register chip and as many buffers as they would like.
And .. uh .. it allows for a more ubiquitous .. uh .. implementation of .. of the same .. uh .. architecture as LRDIMM – you just do away with 2 independent products and one product can scale into .. into what anybody might need.
–
–
(NOTE: Inphi failing to men