Google to Upgrade its Memory? Assigned Startup MetaRAM’s Memory Chip Patents

In August, the Official Google Blog announced an upgrade to Google’s infrastructure code-named Caffeine, aimed at making the search engine faster, and Google opened the system up for testing to people who might want to provide feedback. An interview with one of the developers behind the upgrade described it as an upgrade to the Google File System.

While looking around the US Patent Office assignment database this morning, I noticed a number of new patents and patent applications assigned to Google on November 18, 2009, originally granted to startup MetaRAM.

If a search engine wanted to seriously upgrade its capabilities, it might do more than upgrade its software. It might also upgrade the hardware that it uses. The technology detailed in the MetaRAM patents could potentially transform Google’s computing capacity dramatically.

I have no idea at this point whether Google’s Caffeine upgrade also includes a memory upgrade, but I suspect that the MetaRAM patent assignments may not be related.

A post at the Wall Street Journal’s blog from July told us about the demise of MetaRAM, a startup with some very high profile founders, board members, and executives, including a CEO who was chief technology officer of Advanced Micro Devices Inc. for ten years, and a board member who was a former chief scientist of Sun Microsystems Inc. – Turning Out The Lights: Semiconductor Company MetaRAM

An interview with MetaRAM’s original (and now former) CEO, from May of 2008, provides a lot of insight into the direction the company was taking – Pioneering Change in the Memory Market: MetaRam Visionary Fred Weber.

Did Google acquire MetaRAM, or just the patent filings from the company? I’m not sure at this point. The WSJ blog post tells us that the company was shutting down without providing a date for its closing, but the LinkedIn profiles that I could find for people from MetaRAM still list their positions with the company as their present place of employment.

I haven’t been able to locate much in the way of recent news about MetaRAM, nor much that associates the company with Google.

Will Google keep this technology in-house, using it to significantly reduce the cost of servers and workstations while increasing the amount of memory available to those systems? Or will it license or sell the technology, or both? So little is known at this point; I could find no announcement from Google or from anyone at MetaRAM, and I haven’t seen any rumors anywhere on the Web about the transaction behind the assignment of these patent filings.

I’ve listed the granted patents and the patent applications separately below. There are 49 filings in total, though a number of them have been filed more than once, as related applications that share names and abstracts.

Granted Patents:

Integrated memory core and memory interface circuit (7,515,453)

Abstract

A memory device comprises a first and second integrated circuit dies. The first integrated circuit die comprises a memory core as well as a first interface circuit. The first interface circuit permits full access to the memory cells (e.g., reading, writing, activating, pre-charging and refreshing operations to the memory cells). The second integrated circuit die comprises a second interface that interfaces the memory core, via the first interface circuit, an external bus, such as a synchronous interface to an external bus. A technique combines memory core integrated circuit dies with interface integrated circuit dies to configure a memory device. A speed test on the memory core integrated circuit dies is conducted, and the interface integrated circuit die is electrically coupled to the memory core integrated circuit die based on the speed of the memory core integrated circuit die.

Interface circuit system and method for autonomously performing power management operations in conjunction with a plurality of memory circuits (7,392,338)

Abstract

A memory circuit power management system and method are provided. In use, an interface circuit is in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for autonomously performing a power management operation in association with at least a portion of the memory circuits.

Interface circuit system and method for performing power management operations in conjunction with only a portion of a memory circuit (7,386,656)

Abstract

A memory circuit power management system and method are provided. An interface circuit is in communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to perform a power management operation in association with only a portion of the memory circuits.

Interface circuit system and method for performing power saving operations during a command-related latency (7,581,127)

Abstract

A memory circuit power management system and method are provided. In use, an interface circuit is in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for performing a power management operation in association with at least a portion of the memory circuits. Such power management operation is performed during a latency associated with one or more commands directed to at least a portion of the memory circuits.

Methods and apparatus of stacking DRAMs (7,379,316)
Methods and apparatus of stacking DRAMs (7,599,205)

Abstract

Large capacity memory systems are constructed using stacked memory integrated circuits or chips. The stacked memory chips are constructed in such a way that eliminates problems such as signal integrity while still meeting current and future memory standards.

Power saving system and method for use with a plurality of memory circuits (7,580,312)

Abstract

A power saving system and method are provided. In use, at least one of a plurality of memory circuits is identified that is not currently being accessed. In response to the identification of the at least one memory circuit, a power saving operation is initiated in association with the at least one memory circuit.
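That abstract describes, in essence, a simple idle detector: track when each memory circuit was last accessed, and put the long-idle ones into a low-power state. A toy sketch in Python, purely my own illustration (the threshold and all names are assumptions, not from the patent):

```python
# Toy sketch of access-driven power saving. The idle threshold and all
# names are hypothetical; the patent abstract only describes the idea
# of initiating a power saving operation for circuits not being accessed.

def circuits_to_power_down(last_access_cycle, now, idle_threshold=1000):
    """Return indices of memory circuits idle for at least idle_threshold cycles."""
    return [i for i, t in enumerate(last_access_cycle)
            if now - t >= idle_threshold]

# Circuits 0 and 2 have been idle long enough to enter a low-power state:
idle = circuits_to_power_down([0, 4900, 100, 4500], now=5000)  # -> [0, 2]
```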

System and method for simulating an aspect of a memory circuit (7,609,567)

Abstract

A system and method are provided for simulating an aspect of a memory circuit. Included is an interface circuit that is in communication with a plurality of memory circuits and a system. Such interface circuit is operable to interface the memory circuits and the system for simulating at least one memory circuit with at least one aspect that is different from at least one aspect of at least one of the plurality of memory circuits. In accordance with various embodiments, such aspect may include a signal, a capacity, a timing, and/or a logical interface.

System and method for power management in memory systems (7,590,796)

Abstract

A memory circuit power management system and method are provided. In use, an interface circuit is in communication with a plurality of physical memory circuits and a system. The interface circuit is operable to interface the physical memory circuits and the system for simulating at least one virtual memory circuit with a first power behavior that is different from a second power behavior of the physical memory circuits.

Pending Patent Applications:

Some of the patent applications below share names with granted patents above, and may contain the same or very similar content. There are also some pending patent applications with the same name and abstracts, and those have been grouped together below.

Apparatus and Method for Power Management of Memory Circuits by a System or Component Thereof (20080082763)

Abstract

An apparatus and method are provided for communicating with a plurality of physical memory circuits. In use, at least one virtual memory circuit is simulated where at least one aspect (e.g. power-related aspect, etc.) of such virtual memory circuit(s) is different from at least one aspect of at least one of the physical memory circuits. Further, in various embodiments, such simulation may be carried out by a system (or component thereof), an interface circuit, etc.

Combined Signal Delay and Power Saving System and Method for Use with a Plurality of Memory Circuits (20080123459)

Abstract

A system and method are provided. In use, at least one of a plurality of memory circuits is identified. In association with the at least one memory circuit, a power saving operation is performed and the communication of a signal thereto is delayed.

Emulation of Abstracted DIMMs using Abstracted DRAMs (20090216939)

Abstract

One embodiment of the present invention sets forth an abstracted memory subsystem comprising abstracted memories, which each may be configured to present memory-related characteristics onto a memory system interface.

The characteristics can be presented on the memory system interface via logic signals or protocol exchanges, and the characteristics may include any one or more of, an address space, a protocol, a memory type, a power management rule, a number of pipeline stages, a number of banks, a mapping to physical banks, a number of ranks, a timing characteristic, an address decoding option, a bus turnaround time parameter, an additional signal assertion, a sub-rank, a number of planes, or other memory-related characteristics. Some embodiments include an intelligent register device and/or, an intelligent buffer device.

One advantage of the disclosed subsystem is that memory performance may be optimized regardless of the specific protocols used by the underlying memory hardware devices.

Interface Circuit System and Method for Performing Power Management Operations in Conjunction with Only a Portion of a Memory Circuit (20080239857)

Abstract

A memory circuit power management system and method are provided. An interface circuit is in communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to perform a power management operation in association with only a portion of the memory circuits.

Interface Circuit System and Method for Autonomously Performing Power Management Operations in Conjunction with a Plurality of Memory Circuits (20080239858)

Abstract

A memory circuit power management system and method are provided. In use, an interface circuit is in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for autonomously performing a power management operation in association with at least a portion of the memory circuits.

Memory Circuit Simulation System and Method with Power Saving Capabilities (20080027697)

Abstract

A system and method are provided including a component in communication with a plurality of memory circuits and a system. The component is operable to interface the memory circuits and the system for simulating at least one memory circuit with at least one aspect that is different from at least one aspect of at least one of the plurality of memory circuits. The component is further operable to perform a power saving operation.

Memory Circuit Simulation System and Method with Refresh Capabilities (20080027703)

Abstract

A system and method are provided including an interface circuit in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the plurality of memory circuits and the system for simulating at least one memory circuit with at least one aspect that is different from at least one aspect of at least one of the plurality of memory circuits. The interface circuit is further operable to control refreshing of the plurality of memory circuits.

Memory Circuit System and Method (20090024789)
Memory Circuit System and Method (20090024790)

Abstract

A memory circuit system and method are provided in the context of various embodiments. In one embodiment, an interface circuit remains in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for performing various functionality (e.g. power management, simulation/emulation, etc.).

Memory Device with Emulated Characteristics (20080056014)
Memory Device with Emulated Characteristics (20080126687)
Memory Device with Emulated Characteristics (20080103753)
Memory Device with Emulated Characteristics (20080126692)
Memory Device with Emulated Characteristics (20080126689)
Memory Device with Emulated Characteristics (20080109206)
Memory Device with Emulated Characteristics (20080126688)
Memory Device with Emulated Characteristics (20080104314)

Abstract

A memory subsystem is provided including an interface circuit adapted for communication with a system and a majority of address or control signals of a first number of memory circuits. The interface circuit includes emulation logic for emulating at least one memory circuit of a second number.

Memory module with memory stack and interface with enhanced capabilities (20070195613)
Memory module with memory stack (20080126690)

Abstract

A memory module, which includes at least one memory stack, comprises a plurality of DRAM integrated circuits and an interface circuit. The interface circuit interfaces the memory stack to a host system so as to operate the memory stack as a single DRAM integrated circuit.

In other embodiments, a memory module includes at least one memory stack and a buffer integrated circuit. The buffer integrated circuit, coupled to a host system, interfaces the memory stack to the host system so to operate the memory stack as at least two DRAM integrated circuits. In yet other embodiments, an interface circuit maps virtual addresses from the host system to physical addresses of the DRAM integrated circuits in a linear manner.

In a further embodiment, the interface circuit maps one or more banks of virtual addresses from the host system to a single one of the DRAM integrated circuits. In yet other embodiments, the buffer circuit interfaces the memory stack to the host system for transforming one or more physical parameters between the DRAM integrated circuits and the host system.

In still other embodiments, the buffer circuit interfaces the memory stack to the host system for configuring one or more of the DRAM integrated circuits in the memory stack.
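The “linear manner” mapping in the abstract above is essentially what lets a stack of smaller DRAM dies appear to the host as one large device: high-order address bits select a die, and low-order bits pass through as an offset. A hypothetical sketch, with a die capacity I’ve invented for illustration:

```python
# Hypothetical sketch of mapping host ("virtual") addresses linearly onto
# a stack of DRAM dies. The die capacity here is invented for illustration.

DIE_CAPACITY = 2**28  # assume each die in the stack holds 256 MiB

def map_address(virtual_addr, num_dies):
    """Return (die_index, offset_within_die) for a linear mapping."""
    if not 0 <= virtual_addr < num_dies * DIE_CAPACITY:
        raise ValueError("address beyond stack capacity")
    return virtual_addr // DIE_CAPACITY, virtual_addr % DIE_CAPACITY

# A 4-die stack presents itself to the host as a single 1 GiB device:
die, offset = map_address(3 * DIE_CAPACITY + 42, num_dies=4)  # die 3, offset 42
```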

Memory Refresh System and Method (20080025122)

Abstract

A system and method are provided. In response to the receipt of a refresh control signal, a plurality of refresh control signals is sent to the memory circuits at different times.
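The mechanism here, fanning a single incoming refresh signal out to each memory circuit at a different time so that refresh current draw is spread out instead of hitting every chip at once, can be sketched in a few lines. This is only my own illustration; the function name and timing values are assumptions, not taken from the patent:

```python
# Illustrative sketch of staggered refresh fan-out; names and timing
# values are assumptions, not taken from the patent text.

def stagger_refresh(num_circuits, refresh_interval_ns, receipt_time_ns=0):
    """Fan one incoming refresh control signal out to num_circuits memory
    circuits, spacing the per-circuit commands evenly across the interval.
    Returns (circuit_index, issue_time_ns) pairs."""
    step = refresh_interval_ns / num_circuits
    return [(i, receipt_time_ns + round(i * step))
            for i in range(num_circuits)]

# One refresh signal arriving at t=0 is spread across 4 DRAM stacks:
schedule = stagger_refresh(4, 7800)  # [(0, 0), (1, 1950), (2, 3900), (3, 5850)]
```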

Memory Systems and Memory Modules (20080010435)

Abstract

One embodiment of the present invention sets forth a memory module that includes at least one memory chip, and an intelligent chip coupled to the at least one memory chip and a memory controller, where the intelligent chip is configured to implement at least a part of a RAS feature. The disclosed architecture allows one or more RAS features to be implemented locally to the memory module using one or more intelligent register chips, one or more intelligent buffer chips, or some combination thereof. Such an approach not only increases the effectiveness of certain RAS features that were available in prior art systems, but also enables the implementation of certain RAS features that were not available in prior art systems.

Method and Apparatus for Refresh Management of Memory Modules (20080028136)
Method and apparatus for refresh management of memory modules (20080109598)
Method and Apparatus For Refresh Management of Memory Modules (20080028137)
Method and Apparatus For Refresh Management of Memory Modules (20080109597)

Abstract

One embodiment sets forth an interface circuit configured to manage refresh command sequences that includes a system interface adapted to receive a refresh command from a memory controller, clock frequency detection circuitry configured to determine the timing for issuing staggered refresh commands to two or more memory devices coupled to the interface circuit based on the refresh command received from the memory controller, and at least two refresh command sequence outputs configured to generate the staggered refresh commands for the two or more memory devices.

Methods and apparatus of stacking DRAMs (20070058471)

Abstract

Large capacity memory systems are constructed using stacked memory integrated circuits or chips. The stacked memory chips are constructed in such a way that eliminates problems such as signal integrity while still meeting current and future memory standards.

Method and circuit for configuring memory core integrated circuit dies with memory interface integrated circuit dies (20070014168)

Abstract

A memory device comprises a first and second integrated circuit dies. The first integrated circuit die comprises a memory core as well as a first interface circuit. The first interface circuit permits full access to the memory cells (e.g., reading, writing, activating, pre-charging and refreshing operations to the memory cells). The second integrated circuit die comprises a second interface that interfaces the memory core, via the first interface circuit, an external bus, such as a synchronous interface to an external bus. A technique combines memory core integrated circuit dies with interface integrated circuit dies to configure a memory device. A speed test on the memory core integrated circuit dies is conducted, and the interface integrated circuit die is electrically coupled to the memory core integrated circuit die based on the speed of the memory core integrated circuit die.

Multiple-Component Memory Interface System and Method (20080028135)

Abstract

A system and method are provided, wherein a first component and a second component are operable to interface a plurality of memory circuits and a system.

System and Method for Adjusting the Timing of Signals Associated with a Memory System (20080115006)

Abstract

A system and method are provided for adjusting the timing of signals associated with a memory system. A memory controller is provided. Additionally, at least one memory module is provided. Further, at least one interface circuit is provided, the interface circuit capable of adjusting timing of signals associated with one or more of the memory controller and the at least one memory module.

System and Method for Delaying a Signal Communicated from a System to at Least One of a Plurality of Memory Circuits (20080025108)

Abstract

A system and method are provided for delaying a signal communicated from a system to a plurality of memory circuits. Included is a component in communication with a plurality of memory circuits and a system. Such component is operable to receive a signal from the system and communicate the signal to at least one of the memory circuits after a delay. In other embodiments, the component is operable to receive a signal from at least one of the memory circuits and communicate the signal to the system after a delay.

System and Method for Increasing Capacity, Performance, and Flexibility of Flash Storage (20080086588)

Abstract

In one embodiment, an interface circuit is configured to couple to one or more flash memory devices and is further configured to couple to a host system. The interface circuit is configured to present at least one virtual flash memory device to the host system, wherein the interface circuit is configured to implement the virtual flash memory device using the one or more flash memory devices to which the interface circuit is coupled.

System and Method for Reducing Command Scheduling Constraints of Memory Circuits (20080109595)
System and Method for Reducing Command Scheduling Constraints of Memory Circuits (20070204075)
System and Method for Reducing Command Scheduling Constraints of Memory Circuits (20080120443)

Abstract

A memory circuit system and method are provided. An interface circuit is capable of communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to interface the memory circuits and the system for reducing command scheduling constraints of the memory circuits.

System and Method for Simulating a Different Number of Memory Circuits (20080027702)

Abstract

A system and method are provided for simulating a different number of memory circuits. Included is an interface circuit in communication with a first number of memory circuits and a system. Such interface circuit is operable to interface the memory circuits and the system for simulating at least one memory circuit of a second number. Further, the interface circuit interfaces a majority of address or control signals of the memory circuits.

System and Method for Simulating an Aspect of a Memory Circuit (20090285031)

Abstract

A system and method are provided for simulating an aspect of a memory circuit. Included is an interface circuit that is in communication with a plurality of memory circuits and a system. Such interface circuit is operable to interface the memory circuits and the system for simulating at least one memory circuit with at least one aspect that is different from at least one aspect of at least one of the plurality of memory circuits. In accordance with various embodiments, such aspect may include a signal, a capacity, a timing, and/or a logical interface.

System and Method for Simulating an Aspect of a Memory Circuit (20080062773)
System and Method for Simulating an Aspect of a Memory Circuit (20080133825)

Abstract

A memory subsystem is provided including an interface circuit adapted for coupling with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for emulating at least one memory circuit with at least one aspect that is different from at least one aspect of at least one of the plurality of memory circuits. Such aspect includes a signal, a capacity, a timing, and/or a logical interface.

System and Method for Storing at Least a Portion of Information Received in Association with a First Operation for Use in Performing a Second Operation (20080025136)

Abstract

A system and method are provided for use in the context of a plurality of memory circuits. In use, first information is received in association with a first operation to be performed on at least one of the memory circuits. At least a portion of the first information is stored. Still yet, second information is received in association with a second operation to be performed on at least one of the plurality of memory circuits. To this end, the second operation may be performed utilizing the stored portion of the first information in addition to the second information.

System and Method for Translating an Address Associated with a Command Communicated between a System and Memory Circuits (20070192563)

Abstract

A memory circuit system and method are provided. An interface circuit is capable of communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to translate an address associated with a command communicated between the system and the memory circuits.


348 thoughts on “Google to Upgrade its Memory? Assigned Startup MetaRAM’s Memory Chip Patents”

  1. It does logically follow that Google would be looking at hardware methods to increase performance as well. Particularly ways of saving resources (equipment using less power etc) to shave a bit off their operating costs.

  2. Hi James,

    It does make sense, doesn’t it? The acquisition of MetaRAM’s patent filings is a pretty serious move, considering how many patent filings were involved, and the potential impact on Google’s computing power if they were to incorporate the memory boost that the technology can bring to them. I’m still wondering if the people associated with MetaRAM have moved to Google along with the technology.

  3. Just a question – frankly, I didn’t read all the patent abstracts, there are way too many. The question is: there have been reports saying that Caffeine is faster than the older Google, but could the truth be that it’s just their upgrades to the hardware? What do you think? Plus, could it be that the new search engine demands more resources from a hardware point of view?

    Thanks

  4. Hi seoyourblog,

    I was overwhelmed by the number of patent filings myself – it took a while to go through all of them, but I wanted to make sure that they were all assigned to Google, and on the same day, and that they all focused upon memory and hardware. I thought it was worth sharing links to all of the filings for anyone who might be interested in seeing more, but I’m not anticipating too many people going through all of those abstracts. :)

    From what I’ve read, the upgrades related to Google’s Caffeine should reduce bottlenecks in the way that information is retrieved from Google’s databases by making changes to how information on different chunk servers is accessed. While there could possibly be some hardware changes involved, it sounds like the primary change involved is in how the Google File System works.

    I don’t think that hardware changes are at the root of the Caffeine upgrade, but it does look like MetaRAM was actually producing chips that could be used with the kind of servers that Google may be using. Adding more hardware-based memory could make a significant impact on Google’s servers.

  5. It seems odd to me that Google would buy a patent on faster RAM just to use it in their servers – that doesn’t make sense to me. If they wanted to improve the performance of their hardware, you would think that there would be countless ways for them to do that which don’t require a patent on new memory modules. I don’t know what they paid for this, but it seems to me that if they needed more “brute force” in their computing environment that they could easily just buy more servers and get the job done a lot cheaper. I have a very hard time believing that RAM was a significant enough bottleneck in their systems that they needed to go out and acquire a patent on new RAM technology — that just can’t be a cost effective way of increasing the power of your hardware.

    I think this is more likely to be unrelated to Caffeine. I mean how much sense does it make for them to acquire a memory patent just to use it in house and keep somebody else from getting it? It just doesn’t give them a big enough competitive advantage in any of their core markets to warrant acquiring for those purposes, IMO.

    Could this be part of Google’s rumored router? If Google is in fact making its own uber-powerful line of routers, owning patents on state-of-the-art memory modules makes a lot of sense. Very busy routers have huge memory demands.

  6. My understanding is that Netlist, Inc. has brought patent infringement claims, first against MetaRAM and then against Google, over its ‘386 patent, which relates to a breakthrough memory module exhibited recently at SuperComputing 09. Netlist, in 2006 or thereabouts, approached Google under an MOU to discuss providing Google with breakthrough memory modules. Google ultimately declined. Netlist brought a patent infringement suit against MetaRAM, and was in settlement discussions with Google when Google brought a pre-emptive lawsuit for declaratory relief, claiming that it is not infringing on Netlist’s ‘386 patent and that, even if it were, the point is moot because the Netlist patent is invalid. See the Complaint, and the Answer and Counter-Complaint from Netlist, in an action in the central district court of CA.

    Netlist’s symbol is NLST. For transparency, I own less than 20k shares in NLST, but I have no insider information and no relation to either company. It is advisable to do your own research.

  7. Hi Buzzlord,

    It wasn’t just one patent – the USPTO assignment database shows 49 patent filings assigned from MetaRAM to Google. There’s a possibility that Google may already be using some of this memory technology, though I haven’t seen anything yet that says so explicitly. A Google router sounds interesting – more research for me to do. Thanks. :)

  8. Thank you, Auditor.

    I appreciate your providing some details about this controversy. I’ve started to do some research.

    The patent from Netlist in question appears to be this one:

    Memory module decoder

    Abstract

    A memory module connectable to a computer system includes a printed circuit board, a plurality of memory devices coupled to the printed circuit board, and a logic element coupled to the printed circuit board. The plurality of memory devices has a first number of memory devices. The logic element receives a set of input control signals from the computer system.

    The set of input control signals corresponds to a second number of memory devices smaller than the first number of memory devices. The logic element generates a set of output control signals in response to the set of input control signals. The set of output control signals corresponds to the first number of memory devices.
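    The expansion that abstract describes, sometimes called “rank multiplication,” takes chip-select signals sized for a small number of ranks and, using spare address bits, drives a larger number of physical devices. A hedged sketch of that decode step (all signal names here are my own invention, not Netlist’s):

```python
# Hedged sketch of the decode step: the host drives two active-low chip
# selects (it believes two ranks exist), and the logic element uses one
# spare address bit to choose among four physical ranks. All signal
# names here are invented for illustration.

def decode_chip_selects(host_cs, extra_addr_bit):
    """host_cs: two active-low chip selects from the host (0 = asserted).
    Returns four active-low chip selects for the physical ranks."""
    out = [1, 1, 1, 1]
    for i, cs in enumerate(host_cs):
        if cs == 0:  # host asserts its rank i
            out[2 * i + extra_addr_bit] = 0
    return out

# Host asserts its rank 0 with the spare address bit high:
cs_out = decode_chip_selects([0, 1], extra_addr_bit=1)  # -> [1, 0, 1, 1]
```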

    InformationWeek wrote about Google’s response to a number of letters from Netlist in an article titled Google Launches Pre-Emptive Lawsuit Against Memory Maker. It appears that Google received proposals in 2006 from Netlist for server memory, but decided to use another supplier. In May of 2008, Netlist’s CEO sent a letter to Google which claimed that the memory Google chose infringed Netlist’s patent.

    Netlist’s outside counsel, Morrison & Foerster, sent two follow-up letters in June.

    Google, believing that litigation was imminent, responded by asking the court to issue a declaratory judgment that it is not infringing Netlist’s patent and that Netlist’s patent isn’t valid.

    Google filed a Complaint for Declaratory Judgment against Netlist, Inc., on August 29, 2008, asking a Federal District Court in the Northern District of California, San Jose Division, for a judgment stating that “Google does not infringe any valid and enforceable claim of the ‘386 patent” or alternatively “That the ‘386 patent is invalid.”

    According to a docket that I could find for the case, Netlist filed an answer and a counterclaim on November 18, 2008:

    Google Inc. v. Netlist, Inc.
    U.S. District Court
    California Northern District (Oakland)
    CIVIL DOCKET FOR CASE #: 4:08-cv-04144-SBA
    Assigned to: Hon. Saundra Brown Armstrong
    Referred to: Magistrate Judge Joseph C. Spero
    Cause: 35:145 Patent Infringement
    Date Filed: 08/29/2008
    Jury Demand: Both
    Nature of Suit: 830 Patent
    Jurisdiction: Federal Question

    In the Netlist Form 10-Q filing of November 3, 2009 are these statements about litigation between Google and Netlist, and Netlist and MetaRAM:

    Patent Claims

    In May 2008, the Company initiated discussions with Google, Inc. regarding the Company’s claims that Google has infringed on a US patent assigned to the Company relating generally to “rank multiplication” in memory modules. On August 29, 2008, Google filed a declaratory judgment lawsuit against the Company in United States District Court for the Northern District of California, seeking a declaration that Google did not infringe on the Company’s patent, and that the Company’s patent is invalid. Google is not seeking any monetary damages. On November 18, 2008, the Company filed a counterclaim for infringement of the patent by Google. The Company expects to vigorously pursue its claim against Google and to vigorously defend against Google’s claim of invalidity.

    On March 17, 2009, the Company filed a complaint for patent infringement against MetaRAM, Inc. for its infringement of one of the Company’s patents. On March 26, 2009, MetaRAM filed a complaint against the Company for patent infringement. The parties are currently discussing an amicable settlement of these claims. If these discussions are unsuccessful, the Company expects to vigorously pursue its claim against MetaRAM and to vigorously defend against MetaRAM’s separate claim.

    I did find some more information about the lawsuits between Netlist and MetaRAM, though I don’t know how up to date the following dockets are:

    Netlist Inc. v. MetaRAM Inc.
    U.S. District Court
    District of Delaware (Wilmington)
    CIVIL DOCKET FOR CASE #: 1:09-cv-00165-GMS
    Assigned to: Judge Gregory M. Sleet
    Cause: 35:271 Patent Infringement
    Date Filed: 03/12/2009
    Jury Demand: Both
    Nature of Suit: 830 Patent
    Jurisdiction: Federal Question

    Metaram, Inc. v. Netlist, Inc.
    U.S. District Court
    California Northern District (San Francisco)
    CIVIL DOCKET FOR CASE #: 3:09-cv-01309-VRW
    Assigned to: Hon. Vaughn R. Walker
    Demand: $0
    Cause: 35:271 Patent Infringement
    Date Filed: 03/25/2009
    Jury Demand: Plaintiff
    Nature of Suit: 830 Patent
    Jurisdiction: Federal Question

  9. All very intriguing Bill. I tend to agree with Buzzlord though. I don’t feel all this is Caffeine related. Hadn’t heard about this Google router before. Also intriguing.

  10. I don’t mean to be throwing up old rumors, but the Google router rumor ran rampant last January. I haven’t heard much of anything about it since then though… The rumor was that Google was going to make routers to compete with the Juniper line of products. If Google is acquiring technology like this though – it could mean it is for real.

    The thing that gets me is that when Google was making the Android OS, technology journalists kept asking if they would make a phone. Google’s response was always “We do not make hardware.” If you don’t make hardware at all… what would you need with patents for RAM?

  11. All of you are missing the point. MetaRAM allows you to “hang” a lot of GBs off each CPU. Server apps, which are what interests Google, need as many GBs as possible. When you run multiple OSes on top of VMware, for example, which is also a typical server configuration / application, you can easily gobble up TBs of RAM. Google has been designing its own server hardware for at least 7 years. Rumor has it that its hardware group is less than stellar. It is only logical that Google bought the MetaRAM IP portfolio for next to nothing. Netlist should shut up and pack up, not pick a fight with Google over this. They are trying to claim priority over what may be similar – if not identical – to JEDEC’s LR-DIMM work. Interestingly, Inphi has not been mentioned by anyone on this blog. That’s how clueless everyone is.

    My guess is that former employees of MetaRAM have not changed their profiles because many have not landed new jobs yet.

  12. I agree with Bullaman; I think it is all to increase performance to face the upcoming challenges from Bing and Yahoo.

  13. Netlist (NLST) has patent infringement cases against MetaRAM as well as Inphi.

    Netlist has serious IP in “rank multiplication”, “embedded passives” (for freeing up space on memory modules for more memory), and even heat dissipation (so that there is more tolerance for using lower-quality memory chips, which are cheaper).

    GOOG was using MetaRAM. Memory module makers were using MetaRAM, Intel was supporting it. It was the darling of the industry. Except that it was infringing on Netlist’s IP.

    Then MetaRAM went out of business.

    http://venturebeat.com/2008/08/19/idf-intel-gets-behind-start-up-metarams-server-memory-solution/
    IDF: Intel gets behind start-up MetaRAM’s server memory solution
    August 19, 2008

    Google probably does not want to jeopardize its server operations over a “small” (compared to Google’s size) legal issue with Netlist.

    Google and Netlist are in negotiations to cobble together an agreement. That must have been hard while the MetaRAM IP was not under Google’s belt (now that MetaRAM is gone).

    This is probably why Google has had to buy MetaRAM IP, so it can get a better agreement with Netlist.

    Inphi makes components – it makes an “iMB” buffer chip that it wants to sell to memory module makers. Inphi has no IP (intellectual property) in this area. It is probably hoping the memory module makers will deal with infringement issues. However Netlist has filed a suit against Inphi.

    The memory module makers are waiting for JEDEC to put its foot down. Until that happens they probably have to wait.

    Meanwhile Netlist is already manufacturing the 16GB HyperCloud memory.

    From Google’s point of view, there is now no competitor to Netlist. Netlist’s IP is strong, and MetaRAM is not even a company anymore. Inphi is making components for memory module makers but holds no IP.

    You can understand this probably makes Google jittery regarding the supply of these new memory modules, since Google is probably a big consumer of heavily memory-loaded computers. The availability of Netlist’s 16GB HyperCloud memory module allows it to double capacity without having to install additional servers (for memory-bound tasks, such as virtualization/cloud computing).

    As long as the legal issues are not resolved, Inphi and the memory module makers will be wary of who they partner with to build the memory needed by Google.

    Meanwhile Netlist can provide that memory right now. Using its 16GB HyperCloud, Google can install 384GB of memory per server (doubling memory, which would otherwise require adding additional servers).

    Netlist allows doubling of memory, reduction in power consumption (which can be a lot for a heavily memory-loaded machine) and speed improvements. By avoiding adding new servers, you cut power consumption (as well as UPS and generator power requirements for data centers).

    What do you think Google will do? It probably wants to resolve the memory use issue, so it can continue forward.

  14. Hi Buzzlord,

    I’m still puzzling out why Google decided to invest in 49 patent filings involving memory. Internal uses only? Maybe. Though the patent infringement lawsuits can make one wonder.

  15. Hi humza,

    It’s beginning to sound like a possibility that Google may have been using MetaRAM’s technology for a while. That would make some sense.

  16. Hi netlist_follower,

    Thank you for your insight on this topic – much appreciated. I haven’t checked into the Netlist lawsuit against Inphi yet, but that sounds like a good next step.

    Inphi does have a number of patents, and it seems that they are now claiming that the two I listed a couple of comments ago are being infringed upon by Netlist in their modules, including the 16 GB HyperCloud memory.

    Interesting speculation as well on Google’s acquisition of MetaRAM’s intellectual property. I imagine that Google will be happy to resolve these issues. Now that they own all of those patent filings, I’m wondering what their next steps might be, other than pursuing their declaratory judgment action and defending the suit brought by Netlist.

  17. Hi InTheKnow,

    Thanks for a very interesting comment.

    It did seem that Google would be interested in using MetaRAM’s technology for their own hardware.

    Interestingly, Inphi filed a patent infringement suit earlier today against Netlist. The press release they issued included some specifics:

    Inphi’s lawsuit alleges that Netlist’s DDR3 Registered memory modules, including their recently announced HyperCloud™, infringe on the following Inphi U.S. Patents: No: 7,307,863 and 7,479,799. The patents relate to memory interface technologies used in enterprise server and storage applications.

    I looked up the patents:

    Programmable strength output buffer for RDIMM address register (7,307,863)

    Abstract

    A programmable strength output buffer intended for use within the address register of a memory module such as a registered DIMM (RDIMM). The output signals of an array of such buffers drive respective output lines that are connected to the address or control pins of several RAM chips. The programmable buffers vary the strength of at least some of the output signals in response to a configuration control signal, such that the output signals can be optimized for the loads to which they will be connected.

    Output buffer with switchable output impedance (7,479,799)

    Abstract

    An output buffer with a switchable output impedance designed for driving a terminated signal line. The buffer includes a drive circuit, and a means for switching the output impedance of the drive circuit between a first, relatively low output impedance when the output buffer is operated in a `normal` mode, and a second output impedance which is greater than the first output impedance when operated in a `standby` mode. By increasing the drive circuit’s output impedance while in `standby` mode, power dissipation due to the termination resistor is reduced. When used in a memory system, additional power savings may be realized by arranging the buffer such that the increased impedance in `standby` mode shifts the signal line voltage so as to avoid the voltage range over which a line receiver’s power consumption is greatest.

    Things seem to be heating up.

  18. I have a very hard time believing that RAM was a significant enough bottleneck in their systems that they needed to go out and acquire a patent on new RAM technology — that just can’t be a cost effective way of increasing the power of your hardware.

  19. I am going to go with the general opinion here that it does seem unlikely, but we have confirmation that they are doing this, and there will be a reason behind it.

    I am far from an expert on patents and don’t really know how they work or what level of protection they offer, but surely there has to be some visibility on these things so they can be challenged before being granted? I know from watching Dragons’ Den that what offers protection in one country does not always work in another.

    Google’s veil of mist and secrecy is half the fascination; part of me (the cynic) always thinks PR….

  20. Hi Bill,

    Update on the Netlist v Google litigation. After a hotly contested hearing on 11/12/09, the Hon. Armstrong issued an order dated 11/16/09 in favor of Netlist’s ‘386 patent claim construction. On 11/18/09 or so, Google changed attorneys.

    On 11/24/09, in the Netlist v MetaRAM joint case management statement, MetaRAM disclosed that it “ceased operations, and prior to then sold only approximately $37,000 worth of DDR3 memory controllers subject to lawsuit. None of those memory controllers were used by MetaRAM’s customers in commercial sales, and instead all were destroyed.” In the following sentence, MetaRAM referenced Google v Netlist as a related case. Actions speak louder than words. A reasonable inference is that MetaRAM has taken drastic action to reduce and limit any potential liability from alleged patent infringement. Can you guess the identity of MetaRAM’s customer, and why $37,000 worth of non-commercial DDR3 memory controllers were destroyed?

    In re Netlist v Inphi, my understanding is that Netlist’s IP portfolio is a continuation of its earlier patents. In fact, Netlist received more patent(s) in November 2009.

    IMHO, Netlist is a logical acquisition target for Google, CISCO, Intel or even Microsoft in 2Q/3Q of 2010. Then again, you never know about tech stocks.

  21. I wonder if the Nov 12, 2009 court order has anything to do with NLST stock price rise starting Nov 11-12.

  22. Evidently, as you load up memory, the achievable bandwidth tends to go downhill. NLST’s HyperCloud memory module (demoed at Supercomputing 09 on HP ProLiant servers) doubles max memory to 384GB, while retaining the ability to operate at max speed.

    http://www.prnewswire.com/news-releases/netlist-demonstrates-new-hypercloud-memory-modules-at-supercomputing-09-70174702.html
    Netlist Demonstrates New HyperCloud Memory Modules at Supercomputing 09
    Showcases interoperability between standard JEDEC server memory solutions and HyperCloud modules

    NLST HyperCloud presentation:
    http://www.scribd.com/doc/22814075/Hyper-Cloud-Press-Presentation-11-4-09

    It is also lower power (which for maxed-out memory could be significant), but more significantly, in applications which were previously memory-bound (as GOOG’s might be, or in virtualization where you need to keep “virtual images” in memory), it halves the need for servers (which means you halve the need for power and REDUNDANT power, i.e. UPSs and generators).

    http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
    Netlist goes virtual and dense with server memory
    So much for that Cisco UCS memory advantage
    By Timothy Prickett Morgan
    Posted in HPC, 11th November 2009 18:01 GMT

    The one cost that Duran did not calculate was savings in power and cooling, but the HyperCloud memory burns under 10 watts for a 16GB module, and in general, for a given capacity, a HyperCloud module will burn 2 to 3 watts less than a standard DDR3 module. And because HyperCloud memory can run at the full 1.33GHz speed, regardless of the capacity in the box, there should be a sizeable performance boost on applications that are sensitive to memory bandwidth – maybe as high as 50 per cent, says Duran.
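    A quick back-of-the-envelope check on those figures (my own arithmetic; the 384GB server capacity and 16GB module size come from elsewhere in this thread, and the 2–3 watt per-module saving is from the quote above):

    ```python
    # Rough estimate (my own arithmetic, not from the article):
    # per-server power saved by HyperCloud's claimed 2-3 W/module advantage.
    server_capacity_gb = 384   # max capacity cited in this thread
    module_size_gb = 16        # one 16GB HyperCloud module

    modules = server_capacity_gb // module_size_gb
    savings_w = (modules * 2, modules * 3)  # low and high estimates

    print(modules)    # 24 modules per fully loaded server
    print(savings_w)  # (48, 72) watts saved per server
    ```

    So a fully loaded server would save on the order of 50–70 watts, before counting the UPS and generator overhead mentioned above.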

    NLST had shown this technology to GOOG some time back. Later, GOOG chose to go with what now seems to have been MetaRAM (though, as the poster above suggests, MetaRAM – now GOOG – claims not much was used?).

    NLST sent notice to GOOG about use of infringing technology. GOOG in turn sued to be left alone.

    GOOG’s acquisition of MetaRAM’s IP is thus in line with it consolidating its defense against the NLST suit (it wouldn’t want MetaRAM, in bankruptcy, slacking off on defense, since that would impact GOOG as a user of MetaRAM).

    With the demise of MetaRAM, the only infringer left is Inphi, a component manufacturer. It probably seeks to offload infringement issues onto the memory makers. However, the memory makers are waiting for JEDEC to decide what to do, with NLST in control of much of the IP in “rank multiplication”.

    Earlier, the memory module makers were allying with MetaRAM to supply them the technology. With MetaRAM no more, they will be looking to Inphi to provide it. However, Inphi has no IP in this area (the retaliatory lawsuit brought against NLST mentions some unrelated patents that Inphi holds).

    http://www.inphi.com/news-events/press-releases-and-media-alerts/inphi-announces-availability-of-industrys-first-memory-buffer-based-on-its-isolation-memory-buffer-technology.php
    Inphi announces availability of industry’s first memory buffer based on its isolation memory buffer technology
    Inphi component enables servers and workstations to handle greater volumes of data and support more memory modules

    Since first unveiling details of its iMB technology in June 2009, Inphi has worked closely with the JEDEC standards body and with the entire server technology ecosystem to make the benefits of the iMB technology available in standardized form. Of the multiple approaches to memory buffer (MB) technology, JEDEC has chosen Inphi’s single-chip configuration as the basis for the standard, which is expected to be finalized in the first quarter of 2010. At that time, Inphi plans to have JEDEC-compliant iMB parts available.

    http://www.inphi.com/news-events/inphi-in-the-news/2009/enterprise-memory-for-energystar-systems.php
    Enterprise Memory for EnergyStar Systems
    Thursday, 12 November 2009

    With the release of the energy star rating for compute servers, there has been a number of approaches to meet the requirement and increase density & performance. Standards communities such as JEDEC, the 40G/100G networking associations are currently finalizing adoption of the iMB (Isolation Memory Buffer) technology for new high performance memories. The use of this technology creates a new class of memory module called the LRDIMM (Load Reduced Dual Inline Memory Module). This technology was developed by Inphi, a ten year old high speed analog semiconductor company from Sunnyvale CA.

    The LRDIMMs are being built by Hynix Samsung, Micron, Naya and others using the Inphi chips in Q1 ‘10. The products are currently only for the Enterprise class EnergyStar applications as the Inphi chip that is being used costs about $25US in 100k qty. The advantage of the technology is the power reduction and density/performance improvement while still maintaining the 10-12 BER and supporting a single chip/ single cycle load for both command and address signals.

    By acquiring MetaRAM’s IP, GOOG along with NLST now hold the keys to the new JEDEC LRDIMM standard which is being hashed out.

    http://www.simmtester.com/PAGE/news/showpubnews.asp?num=167
    What is LR-DIMM , LRDIMM Memory ? ( Load-Reduce DIMM)
    Tuesday, October 13, 2009

    CSCO’s UCS strategy does the same thing, except it seeks to put an ASIC on the motherboard and sell its servers.

    In contrast NLST’s solution is to put the ASIC on the memory module itself. Thereby allowing it to be used as a regular memory module.

    Here is an article which explains the difference:

    http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
    Netlist goes virtual and dense with server memory
    So much for that Cisco UCS memory advantage
    By Timothy Prickett Morgan
    Posted in HPC, 11th November 2009 18:01 GMT

    http://www.ideationcloud.com/2009/11/12/after-metaram-enter-netlist-hypercloud-and-more-memory-for-all-types-of-servers/
    After MetaRAM, enter Netlist: HyperCloud and more memory for all types of servers
    By Tarry Singh at 12 November, 2009, 2:03 am

    I don’t know what MetaRAM was expecting to do earlier – either it was asserting IP like NLST is with JEDEC, or MetaRAM was acting as a supplier of chips (like Inphi – though Inphi holds little IP in this area).

    MetaRAM holds IP in the area, but on things like stacked DRAMs, which NLST has said are inefficient for even heat dissipation, etc.

    Here is some more info on the limits to memory capacity:

    http://searchdatacenter.techtarget.com/news/1358898/Ciscos-hopes-its-Extend-Memory-technology-will-boost-UCS
    Cisco’s hopes its Extend Memory technology will boost UCS
    By Bridget Botelho, News Writer
    10 Jun 2009

    Adding capacity
    In traditional systems, the CPU memory controller can use only a certain number of DIMMs and, thus, seeks out that number. The latest Intel Xeon 5500, for instance, can address up to 12 or 18 DIMMs, though it poses a performance tradeoff for the latter, David Lawler, Cisco’s vice president of platform products, told SearchDataCenter.com.

    Cisco’s Extend Memory technology makes a CPU see one DIMM as four separate DIMMs, giving a single 12-DIMM server the memory capacity of 48 DIMMs and up to 384 GB of memory.

    To address this issue, Cisco engineers placed a high-performance chip on the memory bus between the processor and the DIMM that changes the way the CPU searches for DIMMs, Lawler explained. “So when the CPU searches for an 8 GB DIMM, we can represent that as four 2 GB DIMMs instead,” he said.

    Whether a CPU accesses an 8 GB DIMM or four 2 GB DIMMs makes no difference from a capacity standpoint, but using more, smaller DIMMS versus fewer, larger DIMMs is cheaper. An 8 GB DIMM is significantly more expensive than buying four 2 GB DIMMs because the cost of memory increases exponentially with density. A 2 GB DIMM, for example, costs around $125, but an 8 GB DIMM can cost more than $1,000.

    Servers hosting virtual machines with large databases can run out of memory far before they run out of CPU power, so having that extra memory capacity to work with could be a strong selling point for Cisco.

    “On a normal host, memory usually is the first resource we’re short on. When the CPU of a fully loaded ESX box is still at 60%, we often run into memory shortage,” said virtualization expert Gabrie van Zanten.

    On the Ars OpenForum IT community site, one Unix administrator and virtualization user listed the “huge memory density with full-size blades” on UCS as one reason he may switch from Dell blade servers to the UCS.
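    The cost asymmetry described in that article is easy to check with the prices it quotes (roughly $125 per 2 GB DIMM and $1,000+ per 8 GB DIMM):

    ```python
    # Cost comparison using the DIMM prices quoted in the article above.
    cost_2gb_dimm = 125    # approximate price of a 2 GB DIMM
    cost_8gb_dimm = 1000   # low-end price of an 8 GB DIMM

    # Four small DIMMs give the same 8 GB of capacity as one large one.
    four_small = 4 * cost_2gb_dimm
    print(four_small)                  # 500
    print(cost_8gb_dimm - four_small)  # at least 500 saved per 8 GB
    ```

    In other words, presenting four small DIMMs where the CPU expects one large one at least halves the cost of that capacity, which is exactly the economic argument behind the Cisco approach.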

  23. Google probably owns the fastest CPUs and RAM a server could hold, but why upgrade? Maybe they’re trying to fix some server hardware errors, or they are up to something. Since they started engineering a Google phone, why not try their hands at Google routers? That would be cool!

  24. Hypothetically speaking, if Google is venturing out to manufacture servers to compete against Cisco, it would make sense for Cisco to acquire Netlist and Broadcom. That is assuming that either is up for sale. My understanding is that Netlist insiders hold 51% of its stock. They have the manufacturing plant in China to capitalize on this HyperCloud R&D investment. I will vote for the underdog to knock out the giant. The bigger they are, the harder they fall. I’ve got more DD to conduct to set an entry level for Netlist and Broadcom.

  25. Hi Bill,

    Based on my reading of the Inphi and Netlist patent histories and portfolios, I agree with Netlist legal counsel’s opinion that Inphi’s retaliatory infringement claims have no merit. My research shows the following:

    Netlist’ IP attorneys of record: Knobbe, Martin, Olson & Bear
    7,289,386, patent date: October 30, 2007
    Appl. No.: 11/173,175
    Filed: July 1, 2005

    Inphi’s IP attorneys of record: Koppel, Patrick, Heybl & Dawson
    7,307,863, patent date: Dec. 11, 2007
    Appl. No.: 11/195,910
    Filed: Aug. 2, 2005

    Inphi’s 2nd patent referenced in lawsuit
    7,479,799, patent date: Jan 20, 2009
    Appl. No.: 11/376,593
    Filed: Mar. 14, 2006

    Netlist’s ‘386 patent can also claim the benefit of its prior related patents.
    7,289,386, patent date: October 30, 2007
    Appl. No.: 11/173,175
    Filed: July 1, 2005
    CROSS-REFERENCE TO RELATED APPLICATIONS
    The present application is a continuation-in-part of U.S. patent application Ser. No. 11/075,395, filed Mar. 7, 2005, which claims the benefit of U.S. Provisional Application No. 60/550,668, filed Mar. 5, 2004 and U.S. Provisional Application No. 60/575,595, filed May 28, 2004. The present application also claims the benefit of U.S. Provisional Application No. 60/588,244, filed Jul. 15, 2004, which is incorporated in its entirety by reference herein.

    A slice of Netlist’s patent summary and description:

    DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
    Various types of memory modules 10 are compatible with embodiments described herein. For example, memory modules 10 having memory capacities of 512-MB, 1-GB, 2-GB, 4-GB, 8-GB, as well as other capacities, are compatible with embodiments described herein. In addition, memory modules 10 having widths of 4 bytes, 8 bytes, 16 bytes, 32 bytes, or 32 bits, 64 bits, 128 bits, 256 bits, as well as other widths (in bytes or in bits), are compatible with embodiments described herein. Furthermore, memory modules 10 compatible with embodiments described herein include, but are not limited to, single in-line memory modules (SIMMs), dual in-line memory modules (DIMMs), small-outline DIMMs (SO-DIMMs), unbuffered DIMMs (UDIMMs), registered DIMMs (RDIMMs), fully-buffered DIMM (FBDIMM), mini-DIMMs, and micro-DIMMs.

    . . .

    Memory Density Multiplication
    In certain embodiments, two memory devices having a memory density are used to simulate a single memory device having twice the memory density, and an additional address signal bit is used to access the additional memory. Similarly, in certain embodiments, two ranks of memory devices having a memory density are used to simulate a single rank of memory devices having twice the memory density, and an additional address signal bit is used to access the additional memory. As used herein, such simulations of memory devices or ranks of memory devices are termed as “memory density multiplication,” and the term “density transition bit” is used to refer to the additional address signal bit which is used to access the additional memory.

    In certain embodiments utilizing memory density multiplication embodiments, the memory module 10 can have various types of memory devices 30 (e.g., DDR1, DDR2, DDR3, and beyond). The logic element 40 of certain such embodiments utilizes implied translation logic equations having variations depending on whether the density transition bit is a row, column, or internal bank address bit. In addition, the translation logic equations of certain embodiments vary depending on the type of memory module 10 (e.g., UDIMM, RDIMM, FBDIMM, etc.). Furthermore, in certain embodiments, the translation logic equations vary depending on whether the implementation multiplies memory devices per rank or multiplies the number of ranks per memory module.
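    As a rough illustration of the “density transition bit” idea described above (my own sketch, not Netlist’s actual translation logic, and the function name and bit layout are my own invention), an extra address bit can steer accesses between two physical ranks so the memory controller sees a single rank of twice the density:

    ```python
    # Minimal sketch of "memory density multiplication": the density
    # transition bit selects one of two physical ranks, so two ranks of
    # density D look to the controller like one rank of density 2*D.
    # This is an illustration only; real translation logic differs
    # depending on whether the bit is a row, column, or bank address bit.

    def translate(address, transition_bit_pos):
        """Split a controller address into (physical_rank, rank_address)."""
        rank = (address >> transition_bit_pos) & 1  # acts as a chip-select
        # Remove the transition bit; the remaining bits address the rank.
        mask = (1 << transition_bit_pos) - 1
        low = address & mask
        high = (address >> (transition_bit_pos + 1)) << transition_bit_pos
        return rank, high | low

    # With the transition bit at position 3, addresses 0-7 fall in rank 0
    # and 8-15 in rank 1, each with the same 3-bit internal address range.
    print(translate(5, 3))   # (0, 5)
    print(translate(13, 3))  # (1, 5)
    ```

    The point of the patent language quoted above is that where this extra bit sits (row, column, or internal bank) and the module type (UDIMM, RDIMM, FBDIMM) change the translation equations, but the principle is this bit-steering.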

    My understanding is that Google, MetaRAM and Inphi have a weak argument for why they should not have to pay for “alleged” patent infringement. Equity and corporate ethics favor Netlist.
    Please do your own DD and come to your own conclusion. Happy holidays . . .

  26. Hi Inspirational,

    I’m not sure that Google acquired MetaRAM’s intellectual property solely to increase the power of their technology, but I would say that memory modules like this would be helpful in getting rid of some bottlenecks.

  27. Hi Jimmy,

    The patent process is pretty involved, and does bring a fair amount of scrutiny. Some patents require considerable knowledge to understand what they cover – I’m not going to begin to claim that I have enough of that knowledge when it comes to these patents focusing upon memory. :)

  28. Hi Auditor,

    Thanks for the updates. At this point, I’m wondering what kind of memory Google is actually using in their servers.

    Would they build servers in competition against companies like Cisco? Funnier things have happened when it comes to a business developing technology and processes in response to a need, and finding that they have the possibility of a whole new revenue stream. It’s a little tempting to think that if Google were to want to go in that direction, they might start considering Netlist as an acquisition target. I’m not sure if that’s a possibility, or if it is feasible. I still find myself questioning Google’s motivations in acquiring MetaRAM’s IP, especially knowing about the ongoing litigation.

  29. Hi netlist,

    There was also a drop in Netlist’s stock when the Inphi countersuit was announced, regardless of Netlist’s statements that the countersuit had no merit. I appreciate your extensive updates. Thank you. I’m going to have to find some time this weekend to look through all the references that you pointed towards.

  30. Hi Mal,

    I’m not sure that it’s safe to presume what Google is running on their servers at this point. One thing is for certain – there are a lot of lawyers involved in the thick of things here. It’s going to be interesting seeing how all of this plays out.

  31. There’s something interesting here.

    1. Google, Netlist then Metaram and Inphi are tangled up in lawsuits
    2. Netlist stock shoots up and folks make money

    Connected? I think Metaram was selling a lot of DDR2 memory to Google and others (for AMD servers) and about to sell DDR3 for Intel servers. There was a lot of press on Metaram and DDR2 in Feb 2008. Netlist threatens Google. Metaram gets hit with a lawsuit by Netlist. Google buys Metaram. Someone believes Netlist will settle and get the Google and other business. Smacks of insider trading. Search for CEO and board members for these companies and you get to Fred Weber (Metaram CEO) and Atiq Raza (board member) who worked together at AMD. Search on Raza and insider trading and you find he settled an insider trading suit. Remember Hector Ruiz at AMD?

  32. Bill and Auditor: I took a look at your postings. It seems there are 5 lawsuits:

    http://www.rfcexpress.com/lawsuit.asp?id=39876

    http://www.rfcexpress.com/lawsuit.asp?id=44941

    http://www.rfcexpress.com/lawsuit.asp?id=45344

    http://www.rfcexpress.com/lawsuit.asp?id=50603

    http://www.rfcexpress.com/lawsuit.asp?id=52389

    You can see all the details via PACER of course, http://www.pacer.gov/

    There’s a lot on Google v. Netlist. Not so much on the others. Your comments on the patent sale are interesting. The one MetaRam patent in the case against Netlist is 7,472,220. This patent is still assigned to MetaRam, but has a terminal disclaimer.

    See the first page http://www.google.com/patents?vid=USPAT7472220

    From http://www.freepatentsonline.com/help/item/Terminal-Disclaimer.html

    “A binding statement made with the Patent Office in a case where more than one patent
    has been obtained by the inventor on the same invention. The disclaimer will state
    that the later patent will expire at the same time as the former patent and the later
    patent will be enforceable only as long as both the patents are commonly owned.”

    According to PAIR there are 3 patents that have to be commonly owned with 7,472,220:

    http://portal.uspto.gov/external/portal/pair

    11/461439 now US 7,580,312 (transferred according to your list)
    11/524812 now US 7,386,656 (transferred according to your list)
    11/584179 now US 7,581,127 (not on your list but assigned to Google according to the USPTO records)

    There are currently 10 patents granted to MetaRam according to the USPTO; you listed 9, and the last one above is the extra one.

    I will be interested to see how MetaRam and Google handle this with Netlist. If MetaRam doesn’t have rights to enforce this patent any more, they may have to drop their case against Netlist or perhaps Google has to take over. Then interesting things may happen.

    Do either of you have any more information?

  33. Hi IP Agent,

    Thank you very much for your followup on this. When I originally checked the USPTO assignment database, I only saw 49 granted patents and patent applications, but now I’m seeing 50. US 7,581,127 is on my list above (fourth one down), but I didn’t include 7,472,220, and I’m not sure why.

    There’s a new patent application now listed in both lists as well, 20090290442, which wouldn’t have shown up in either search since it wasn’t published until November 26th, but it was assigned to Google on November 18th as well (Unpublished patent applications aren’t displayed in the assignment database at the USPTO). That explains why there are now 50 showing, instead of 49.

    It’s possible that when I searched in the Assignment Database on MetaRAM, I looked in the “Assignor Name” field instead of the “Assignee Name:” field, which would have meant that I would miss 7,472,220, since it doesn’t appear to have been assigned to Google.

    What’s odd is that the USPTO assignment database now lists 50 granted patents and patent applications as having been assigned to MetaRam, and 50 granted patents and pending patent applications as being assigned by MetaRam, and they aren’t the same 50. I’m going to have to check on why there is a mismatch.

    The granted patent you’ve pointed out is:

    Interface circuit system and method for performing power management operations utilizing power management signals.

    Interesting. Thank you.

  34. IP Agent and Bill,

    The above referenced MetaRAM patent application discloses Netlist patent work under Bhakta that was filed in 2005. Since Netlist claims HyperCloud is interoperable, can it work for Cisco legacy servers and routers without upgrading to a new CPU? Has a neutral or OEM eval/review been conducted on HyperCloud? Can HyperCloud be reconfigured for laptop or desktop use?

  35. Interesting patent info.

    auditor:
    NLST claims HyperCloud will work like regular memory. So it should work on desktops. NLST probably isn’t making it for laptops because the form factor may be different for laptop memory (which may or may not allow for extra circuitry). Also, laptops may not be the ideal market for pushing this.

    CSCO’s UCS strategy is essentially neutered by NLST HyperCloud – the difference is CSCO puts the ASIC on the motherboard while NLST puts it on the memory module itself. NLST also uses some other technologies like Planar-X and “embedded passives” to give it more space on the memory module (don’t know much about that).

    NLST HyperCloud – as far as I have understood it – seems to allow greater memory density, greater memory speed and energy efficiency.

    It seems that as you load up the memory channels (electrical loading issues?), the achievable speed goes down. So on heavily memory-loaded systems you have the capacity, but the achievable speed is not giving you the bang for the buck.

    NLST HyperCloud makes the processor think fewer ranks of memory are on board, and runs the memory at full speed.
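    The loading/speed trade-off described above can be sketched numerically. The speed grades below are typical DDR3 RDIMM downclocking figures used purely as illustrative assumptions (actual numbers vary by platform), and `achievable_speed` is a hypothetical helper, not anything from Netlist:

```python
# Illustrative sketch of the loading/speed trade-off: the more DIMMs
# on a channel, the lower the speed the channel can sustain. Figures
# are typical DDR3 RDIMM downclocking values, assumed for illustration.
TYPICAL_MT_S = {1: 1333, 2: 1066, 3: 800}  # DIMMs per channel -> MT/s

def achievable_speed(dimms_per_channel, load_reduced=False):
    """Effective transfer rate for one memory channel.

    A load-reducing module (the HyperCloud-style approach) presents a
    single electrical load per slot, so the channel can, in principle,
    stay at full speed even when fully populated."""
    if load_reduced:
        return 1333
    return TYPICAL_MT_S[dimms_per_channel]

# A fully populated channel of standard RDIMMs gets throttled...
print(achievable_speed(3))                      # 800
# ...while load-reduced modules keep the channel at full speed.
print(achievable_speed(3, load_reduced=True))   # 1333
```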

    Repeating some references from above:

    http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
    Netlist goes virtual and dense with server memory
    So much for that Cisco UCS memory advantage
    By Timothy Prickett Morgan
    Posted in HPC, 11th November 2009 18:01 GMT

    The one cost that Duran did not calculate was savings in power and cooling, but the HyperCloud memory burns under 10 watts for a 16GB module, and in general, for a given capacity, a HyperCloud module will burn 2 to 3 watts less than a standard DDR3 module. And because HyperCloud memory can run at the full 1.33GHz speed, regardless of the capacity in the box, there should be a sizeable performance boost on applications that are sensitive to memory bandwidth – maybe as high as 50 per cent, says Duran.

    NLST HyperCloud presentation:
    http://www.scribd.com/doc/22814075/Hyper-Cloud-Press-Presentation-11-4-09
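    As a rough back-of-the-envelope on the power figures quoted from The Register above (“2 to 3 watts less than a standard DDR3 module”) – the 18-slot server and 24/7 duty cycle here are my own illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope per-server savings from the quoted 2-3 W
# per-module figure. The 18-module server is an assumption
# (e.g. a 2-socket DDR3 board), not a number from the article.
modules_per_server = 18
watts_saved_low, watts_saved_high = 2, 3

low = modules_per_server * watts_saved_low    # 36 W per server
high = modules_per_server * watts_saved_high  # 54 W per server
kwh_per_year = low * 24 * 365 / 1000          # ~315 kWh/yr, low end, 24/7

print(f"{low}-{high} W saved per server, ~{kwh_per_year:.0f} kWh/year")
```

Small per-module savings compound quickly across a fully populated server, before even counting the cooling load that each saved watt avoids.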

  36. Look at page 9 of the above Netlist HyperCloud presentation using Adobe viewer. Title is HyperCloud 2 vRank DDR3 RDIMMs. There is a picture of the Netlist memory module. Zoom in on the bottom of the memory module. Look near the notch in the connector at the bottom. Go to about 500% (times 5) magnification. The Netlist chips “Isolation devices” are different sizes, one is really squashed. The only explanation I can see is that the chip images have been pasted in using PhotoShop. Why would Netlist do that if they had real modules to demonstrate?

  37. quote:
    magnification. The Netlist chips “Isolation devices” are different sizes, one is really squashed. The only explanation I can see is that the chip images have been pasted in using PhotoShop. Why would Netlist do that if they had real modules to demonstrate?

    NLST demonstrated at Supercomputer Expo 2009. Here is what HP VP had to say:

    http://www.prnewswire.com/news-releases/netlist-demonstrates-new-hypercloud-memory-modules-at-supercomputing-09-70174702.html
    Netlist Demonstrates New HyperCloud Memory Modules at Supercomputing 09
    Showcases interoperability between standard JEDEC server memory solutions and HyperCloud modules

    To showcase its 2-vRank HyperCloud modules, Netlist is using industry standard servers, such as the HP ProLiant DL380, demonstrated in the following configurations:

    * 8GB and 16GB 2 vRank DDR3 RDIMM functionality
    * Three 2 vRank modules per channel
    * 1333 Mega Transfers per second (MT/s)
    * Interoperability with standard JEDEC DDR3 modules
    * Interoperability with different RDIMM capacities

    “Customers running memory intensive computing environments, such as virtualization, cloud computing, and HPC applications, are often limited by memory bottlenecks in their servers,” said Mike Gill, vice president, Industry Standard Servers Platform Engineering at HP. “The Netlist technology on HP industry-standard servers increases server memory capacity and bandwidth to enhance application performance in converged infrastructures.”

    Here is the JEDEC standard that Inphi wants to sell chips for, which JEDEC is mulling over, and whose final version memory module makers are awaiting before they start using Inphi chips. However, this infringes on NLST intellectual property. So until JEDEC resolves this, memory module makers and Inphi are stuck.

    http://www.simmtester.com/PAGE/news/showpubnews.asp?num=167
    What is LR-DIMM , LRDIMM Memory ? ( Load-Reduce DIMM)
    Tuesday, October 13, 2009

    Here is an example of MU waiting for buffers:

    http://www.micron.com/products/modules/lrdimm/index
    LRDIMM
    quote:
    But because end quality is dependent on more than just
    reliability, we’re also working closely with buffer suppliers and server
    OEMs to ensure that our LRDIMMs function well with multiple server
    platforms.

    HP has its own issues with CSCO’s UCS strategy (which NLST HyperCloud neuters).

    http://www.cnbc.com/id/33865963
    HP’s Shot Across Cisco’s Bow
    Published: Wednesday, 11 Nov 2009 | 5:09 PM ET
    By: Jim Goldman
    CNBC Silicon Valley Bureau Chief

    Earlier this year, Cisco [CSCO] opened a major front against one-time partners Hewlett-Packard [HPQ] and IBM [IBM] in the hotly competitive, and fast-growing server market with its blade, and so-called Unified Computing System initiative. The competition, and headlines it generated, become so intense so quickly that Cisco even posted a blog entitled “Is HP Now a Friend or Foe of Cisco?”

    http://www.reuters.com/article/marketsNews/idCNN1228359720091113?rpc=44
    HP still seen looking for deals after 3Com
    Thu Nov 12, 2009 8:49pm EST
    * Competitive pressures rising in tech with flurry of M&A
    * Analyst say HP could pursue more networking deals
    * Storage, software also seen as attractive to HP
    * Brocade shares plunge 13 pct, unlikely target after 3Com
    By Gabriel Madway

  38. But why is Netlist “faking” pictures in a press presentation made so recently? Doesn’t seem right to me. End.

  39. quote:
    Look at page 9 of the above Netlist HyperCloud presentation using Adobe viewer. Title is HyperCloud 2 vRank DDR3 RDIMMs. There is a picture of the Netlist memory module. Zoom in on the bottom of the memory module. Look near the notch in the connector at the bottom. Go to about 500% (times 5) magnification. The Netlist chips “Isolation devices” are different sizes, one is really squashed. The only explanation I can see is that the chip images have been pasted in using PhotoShop. Why would Netlist do that if they had real modules to demonstrate?

    Yes, you are right. The “buffer”-like chip to the left of the notch does look slightly smaller.

    However note there are already 8 “buffer” chips there (which seems like a canonical figure if the “buffers” are for data lines). The smaller one to the left of the notch is the 9th. It might be an odd one out i.e. used for something else – maybe control signals or something like that.

    After all there is no assertion that the chips are the same.

  40. Hi Roomy Khan,

    Interesting choice of names to use to post with here. I’m not sure who knows what about the inner workings of each of the companies involved, though it does appear to be a mess. I would guess that it would be unavoidable for some of the people involved in these companies to know each other, or to have worked together, but there is the potential that something unusual might be going on. Just a quick disclaimer on my part – I own no stocks in any of the companies mentioned in this post, or in the comments to the post. :)

  41. Hi auditor,

    The engineering and interoperability of memory modules go a bit outside my area of expertise. Thankfully, netlist was able to answer your questions on that topic.

  42. Hi netlist,

    Thanks for answering auditor’s and engineer’s questions. I still have some catchup reading with the links that you’ve posted. (And I’m still wondering why Google purchased MetaRAM’s IP.)

  43. On Dec 4, 09, Netlist brings a separate suit against Google for infringing Netlist USPTO patent 7,619,912, entitled “Memory Module Decoder”, issued on Nov 17, 2009 and based on a patent application filed in mid-2005. In para 9, Netlist alleges that Google infringed the ‘912 Patent, including through its use of 4-Rank Fully Buffered Dual In-Line Memory Modules (4-Rank FBDIMMs) in its server computers.

    This new Complaint significantly advances Netlist’s claims and rights against Google, because it comes after Netlist examined a Google server, having won a discovery ruling from the (Google v Netlist) Court authorizing Netlist to inspect the server despite Google’s strong objections.

    Netlist’s Prayer for Relief includes temporary and permanent injunctive relief, and “treble damages” for unlawful practices of Google characterized as “willful and deliberate”.

    This Complaint reads: “The ‘912 Patent is directed to memory modules with a logic element that overcomes computer system limitations that would otherwise constrain the memory module architectures with which the computer system can operate. As a result, the claimed memory modules effectively increase the memory capacity and improve the energy efficiency of the computers in which they reside. Netlist is the owner of the entire right, title, and interest in and to the ‘912 Patent. A true and correct copy of the ‘912 Patent is attached hereto as Exhibit 1.” Reference Case3:09-cv-05718-EMC Filed 12/04/09

  44. Hi Auditor,

    From what little I’ve seen online regarding the injunctive relief requested on that suit, in part it asks for Google to stop using servers that use memory that infringes upon the Netlist patent. I don’t know how many servers Google uses that might include the memory modules in question, but if that is granted, it could possibly be a harsh blow to Google. Have to see if I can find a copy of the complaint.

  45. From NLST’s complaints, and GOOG’s testimony, it seems to suggest GOOG is not just an innocuous buyer of memory from MetaRAM or some such infringer.

    But GOOG seems itself to be a major party involved in issues of memory design (something which other posts here seem to suggest as well – that they had a hardware design group for such things).

    NLST’s complaint includes use by GOOG of 4-rank FBDIMMs and inducing others to sell such stuff (maybe MetaRAM ?).

    Patent in question is:
    http://www.freepatentsonline.com/7619912.pdf

  46. Just some comments on looking through the court dockets in GOOG suit against NLST.

    In the previous GOOG suit against NLST, GOOG and NLST have a settlement conference in August 2010. They will probably have to thrash out an agreement before then.

    In looking through court documents one can see that NLST has got access to a GOOG server (after GOOG protestations).

    The protocol to be followed by NLST is outlined in:
    JOINT INSPECTION PROTOCOL AND [PROPOSED] ORDER

    NLST gets to inspect FBDIMMs – AMB buffer manufacturer, use of “Mode C” and non-Mode C, power consumption, replace with standard FBDIMMs, monitor thermal stuff, take max of 20 photographs (“Attorney’s Eyes Only”). Inspected at GOOG lawyer’s offices (Fish and Richardson).

    GOOG is saying it doesn’t contest that it is using FBDIMMS in Mode C.

    Its argument, perhaps, is that the NLST IP is faulty.

    NLST removed Morrison and Foerster and replaced them with Pruetz Law Group (a small outfit supposedly very good with IP).

    From the docket item – “AMENDED JOINT CASE MANAGEMENT CONFERENCE STATEMENT AND [PROPOSED] ORDER”:

    NLST inventors Jayesh Bhakta and Jeffrey Solomon have testified.

    GOOG employees under spotlight:

    Rick Roy – “involved in the development of the accused 4-rank FBDIMMs and who participated in meetings with Netlist concerning its patented technology”

    Andrew Dorsey – same as above

    Rob Sprinkle – same as above

    GOOG’s main argument maybe presented in this docket item:
    [REDACTED] GOOGLE INC.’s RESPONSIVE CLAIM CONSTRUCTION BRIEF

    Earlier Judge Armstrong denied GOOG request to include NLST ‘386 patent “prosecution history” (at the Patent Office I presume):

    As auditor reported above:
    Update on Netlist v Google litigation. After a hotly contested hearing on 11/12/09, the Hon. Armstrong issued an order dated 11/16/09 in favor of Netlist’s ‘386 patent claim construction. On 11/18/09 or so, Google changed the attorney.

    The Nov 12, 2009 order states:

    THIS COURT HOLDS THAT pursuant to Markman v. Westview Instruments, Inc., 52 F.3d
    967, 980 (Fed. Cir. 1995) aff’d, 517 U.S. 370 (1996) and this Court’s Standing Order at Paragraph
    10, because the ‘386 Patent’s prosecution history is not in evidence and not addressed in either
    parties’ claim construction papers, Netlist’s objection is sustained.
    IT IS HEREBY ORDERED THAT Google and its counsel shall not present, refer to,
    comment upon, introduce or use in any way, the ‘386 Patent’s prosecution history in its claim
    construction presentation.

    From docket item # 27:

    GOOG does not dispute it is using “Mode C” in its FBDIMMs.

    From docket #27:

    NLST wants to see GOOG servers so it can verify they are using what JEDEC refers to as “Mode C” to make it seem like there are fewer ranks of memory than actually are on the memory module.

    From GOOG’s account, they say that in early 2006 GOOG was looking for manufacturers and testers of its FBDIMMs. It discussed this with various companies (including NLST).

    They signed NDAs to see GOOG’s FBDIMM design.

    “GOOG does not dispute that its FBDIMMs operate in Mode C” ..

    Earlier GOOG had said (docket #33 documents) that it may call Desi Rhoden (Exec. VP of Inphi) to explain rank etc.

    That witness may be less of an impartial “expert” after NLST vs. Inphi.

    The impression one gets from the docket info in GOOG’s suit against NLST, is that GOOG was not just a customer of MetaRAM, but an active developer of memory, and that it was a user of privileged information that NLST gave to GOOG when NLST proposed new memory for GOOG.

    Whether that info leakage had any link back to MetaRAM (through GOOG) is another story.

  47. Another point of view on Netlist vs Google notes the lobbying by Google asking that changes be made to current patent law. One wonders whether Google understands that it has a weak case against Netlist. That’s not to say that Google’s efforts to have the laws changed are aimed wholly against Netlist, but rather against companies like Netlist. I would hope, for the sake of innovation, that small companies like Netlist are not stripped of their ability to bring new ideas to market and be rewarded for their efforts.

  48. The last post was a bit haphazard – just a quick run through the filings.

    In answer to the potential question – “does NLST HyperCloud really work?” – I guess we have confirmation from GOOG’s extensive use of “Mode C” in its servers.

    This is something which has emerged from discovery in GOOG vs. NLST (the case which GOOG brought – no monetary damages, but to be left alone by NLST).

    GOOG acknowledges use of “Mode C”. This means their thrust will primarily be on claiming invalidity of NLST IP. However, NLST IP goes back to March 2004 (according to NLST filings) on the ‘386 patent. This predates the MetaRAM IP (which GOOG has now bought, perhaps in a panic to ensure that a MetaRAM loss while in bankruptcy does not wind up hurting GOOG in its own case).

    I am not sure about the relationship between GOOG and MetaRAM – it is possible that the “other manufacturer” that GOOG used to manufacture was MetaRAM (?). There are few other players in this space – MetaRAM is dead, and Inphi is a generic component manufacturer and holds little IP – and they are awaiting JEDEC FBDIMM “Mode C” proposed standard results before module makers decide what to do.

    In GOOG vs. NLST (the case which GOOG brought in order to get relief), GOOG has had to furnish its server – and the resulting discovery has led to the recent NLST vs. GOOG lawsuit (which refers to another NLST patent as well). GOOG has lost its claim that the NLST patent prosecution history should be examined in the proceedings.

    This seems to be the status thus far.

  49. Dec 17, 2009 – the GOOG vs. NLST and more recent NLST vs. GOOG (concerning another patent infringement that NLST alleges following discovery in GOOG vs. NLST) have now been consolidated.

    Evidently both NLST and GOOG wanted the two cases to be combined together. Accordingly new court dates have been set.

  50. Hi spencity,

    I have seen other places where Google had been asking for patent reform, before any hints of this litigation came out. The patent process itself does seem to be much more difficult for smaller businesses. I don’t know enough about the facts behind the Google/Netlist litigation to judge the strength of their case, but more seems to be coming out.

  51. Hi netlist,

    I’m still not certain of the relationship between Google and MetaRAM at this point either. I did notice a few more patent filings assigned from MetaRAM to Google a week after the initial batch of assignments.

    I do appreciate the updates. Consolidating the cases does make sense, on the basis of cost and judicial economy alone. This is getting pretty interesting.

  52. Hi Bill Slawski,

    It’s interesting that Netlist did not react with a knee-jerk response to Google’s court filings earlier this year by immediately countersuing. Instead, Netlist chose to first pursue legal action against Inphi. It makes sense to attempt to prove the legitimacy of the patents in question against the smaller company first, in order to arm itself with court-tested evidence. It will be much more difficult for Google to claim that the patents in question are invalid.

  53. If “Mode C” usage shows up in a random GOOG server, what are we talking about here ? That nearly all GOOG servers use the infringing “Mode C” ?

    Since “Mode C” is a smoking gun for use of 4-rank/virtual-rank – as it seems to have BIOS report (incorrect) info to the processor so it can be fooled.

    This would mean a lot more servers than claimed – MetaRAM (or was it GOOG – anyone know who said that?) said there were only a “few” such infringing products manufactured. It does not seem like a few if all GOOG servers are tainted the way the server displayed in the discovery phase (of GOOG vs. NLST) was.

  54. I am not totally clear on this, but it seems “Mode C” is related to having a (patched) BIOS report incorrect memory info to the processor.

    Some info on JEDEC’s FBDIMM Mode C proposed standard
    http://www.jedec.org/download/search/JESD82-20A.pdf
    http://www.jedec.org/download/search/JESD82-28A.pdf

    Of course, this seems not to be required by the newer NLST HyperCloud memory – which is supposed to work along with other memory in unaltered motherboards.

    But “Mode C” usage is indicative of an attempt to do 4-rank and so is a “smoking gun”.
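    One way to picture the rank-hiding idea described above – a hedged sketch based on my reading of the public descriptions of “Mode C” and 4-rank modules, not the actual patented circuit: on-module logic combines the host’s two chip-select signals with one extra address bit, so four physical ranks sit behind what the host believes is a 2-rank module.

```python
# Hedged sketch of rank multiplication: the host drives two chip-select
# lines as if addressing a 2-rank module; hypothetical on-module logic
# combines them with one extra address bit to pick among 4 physical
# ranks. Illustrative only, not the actual patent claims.

def decode_rank(cs0, cs1, extra_addr_bit):
    """Return the physical rank (0-3) selected, or None when idle."""
    if not cs0 and not cs1:
        return None                   # no rank addressed this cycle
    host_rank = 0 if cs0 else 1       # the 2 ranks the host believes in
    return host_rank * 2 + extra_addr_bit

# The host only ever asserts cs0 or cs1, yet all 4 ranks are reachable:
print(decode_rank(True, False, 0))   # 0
print(decode_rank(True, False, 1))   # 1
print(decode_rank(False, True, 0))   # 2
print(decode_rank(False, True, 1))   # 3
```

Under this picture, the (patched) BIOS only has to report 2 ranks per module to the memory controller; the extra decoding happens behind its back, which is why Mode C usage reads as a “smoking gun” for 4-rank operation.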

  55. Thank you for this page. Got good info. Top questions on my mind:

    1. Why did MetaRAM shut down? Couldn’t find anything. (My conspiracy theory: did the VCs find out the company was based on stolen IP? MetaRAM was established in mid-2006; the NLST patent, 7,289,386, was filed in mid-2005.)

    2. Is GOOG collecting MetaRAM patents to have bargaining/negotiating power with NLST? (The way patents are written, there is always room to overclaim/underclaim patent coverage, although attorneys try to make them as broad as possible.)

  56. Hi spencity,

    While the timing of filing lawsuits may involve strategy, I’m not sure that we can read too much into that timing sometimes. With a certain amount of time to file some claims under statutes of limitations, and other times dictated by court rules, someone filing a claim or counterclaim may not always be free to file a case in court at exactly the ideal time to do so.

    It is interesting that Netlist did first file a suit against Inphi, though.

  57. Hi netlist,

    I’ve been wondering how many Google servers might be using “Mode C” as well, and if they might be affected by the outcome of a settlement or judgment.

  58. Hi Mike,

    You’re welcome. I’m wondering the same things myself. It really was a total surprise to see all of those patent filings assigned to Google. I didn’t realize at the time that there was a hornet’s nest of litigation to go with them.

  59. We don’t know for certain what kinds of memory modules Google uses in its servers, but a recently published study from Google on DRAM errors doesn’t mention any modules of more than 4 GB. The paper does mention that data collected from the study “covers multiple vendors, DRAM capacities and technologies, and comprises many millions of DIMM days.”

    In that paper, we’re told this about Google’s systems:

    Our data covers the majority of machines in Google’s fleet and spans nearly 2.5 years, from January 2006 to June 2008.

    Each machine comprises a motherboard with some processors and memory DIMMs. We study 6 different hardware platforms, where a platform is defined by the motherboard and memory generation.

    The memory in these systems covers a wide variety of the most commonly used types of DRAM. The DIMMs come from multiple manufacturers and models, with three different capacities (1GB, 2GB, 4GB), and cover the three most common DRAM technologies: Double Data Rate (DDR1), Double Data Rate 2 (DDR2) and Fully-Buffered (FBDIMM). DDR1 and DDR2 have a similar interface, except that DDR2 provides twice the per-data-pin throughput (400 Mbit/s and 800 Mbit/s respectively). FBDIMM is a buffering interface around what is essentially a DDR2 technology inside.

    The paper is:

    DRAM Errors in the Wild: A Large-Scale Field Study (pdf)

    It was written by Bianca Schroeder from the University of Toronto, and Google’s Eduardo Pinheiro and Wolf-Dietrich Weber. It was presented at SIGMETRICS/Performance’09, June 15–19, 2009, in Seattle, Washington.

  60. quote:
    I’ve been wondering how many Google servers might be using “Mode C” as well, and if they might be affected by the outcome of a settlement or judgment.

    The court asked them to show a “GOOG server” and they showed the one with “Mode C” in it. In court filings, GOOG has not mitigated the impact by saying “only a few servers are implicated”. Instead they have said they are not denying use of “Mode C”, but are disputing the value of NLST’s IP.

    However, this is a risky tactic, as the cost of failure would be high (and possibly unacceptable) for GOOG. Which means settlement. The GOOG vs. NLST lawsuit that GOOG filed in reply to NLST’s letter to GOOG may just have been that – a way to buy time for a soft landing, especially if they did not have a good answer to NLST’s letters.

    Does GOOG throw away old servers and continue replacing, or is the error rate such that they wind up replacing them anyway after some months ?

  61. MetaRAM link with GOOG is unclear. My impression was it was MetaRAM which sold “infringing” modules to GOOG (NLST vs. MetaRAM).

    Now it turns out GOOG was the sponsor with component specs and seeking someone to manufacture according to GOOG specs (GOOG vs. NLST court dockets).

    Since MetaRAM (NLST vs. MetaRAM) claims a very small amount of sales, that would not account for the proliferation of “Mode C” in standard GOOG servers (the one GOOG showed when forced by discovery in GOOG vs. NLST). Also MetaRAM says they were not “sales” and were “destroyed”.

    Question is – why would they “destroy” that hardware ?

    From auditor post above:

    On 11/24/09 in Netlist v MetaRAM joint case mgmt statement, MetaRAM disclosed additional comment that it “ceased operations, and prior to then sold only approximately $37,000 worth of DDR3 memory controllers subject to lawsuit. None of those memory controllers were used by MetaRAM’s customers in commercial sales, and instead all were destroyed.” In the following sentence, MetaRAM referenced Google v Netlist as related case. Actions speak louder than words. A reasonable inference is that MetaRAM has taken drastic action to reduce and limit any potential liability from alleged patent infringement. Can you guess the identity of the MetaRAM’s customer, and why $37,000 worth of non-commercial DDR3 memory controllers were destroyed?

  62. quote:
    1. why did metaram shut down ? Couldnt find anything. (my conspiracy theory:

    Yes, not clear to me either (first article below). It could have been:

    – the semiconductor slump (low memory prices) of that time
    – serious issues with the technology not working well
    – patent issues (or a realization that NLST had the earlier IP, or a more comprehensive set of related IPs – for example in “embedded passives” – which would allow successful use)

    NLST has IP in “embedded passives” which frees up real estate on the memory module. In addition, MetaRAM has IP in “stacked modules”, which NLST has criticized for its inability to deliver symmetric lines to memory chips:

    http://www.netlist.com/technology/technology.html
    While some packaging companies stack devices to double capacity, Netlist achieves the same result without stacking, resulting in superior signal integrity and thermal efficiency. Stacking components results in unequal cooling of devices, causing one device to run slower than the other in the stack. This often results in module failures in high-density applications.

    The density limitation is solved by proprietary board designs that use embedded passives to free up board real estate, permitting the assembly of more memory components on the substrate. The performance of the memory module is enhanced by fine-tuning the board design to minimize signal reflections, noise, and clock skews.

    This is a presentation by NLST’s Bill Gervasi on NLST’s “embedded passives” (he went on to SMOD and chaired the JEDEC DRAM Packaging Committee):
    http://www.discobolusdesigns.com/personal/IMAPS_netlist_embedded_resistor_reliability_20050125.pdf

    NLST’s new HyperCloud memory modules are pictured in this presentation (pg. 9):
    http://www.scribd.com/doc/23156890/Hyper-Cloud-Press-Presentation-11-24-09New
    Hyper Cloud Press Presentation 11-24-09New
    Date Added 11/25/2009

    Compare that to:

    MetaRAM’s modules – they do seem a bit cluttered (with possibly asymmetrical chip layout ?):
    http://www.ansoft.com/ie/Track2/DDR3%20Memory%20Module%20Design.pdf

    Inphi’s “iMB” buffer and a possible memory module:
    LR-DIMM with Inphi’s iMB™ Component
    http://www.inphi.com/images/productImageLibrary/highRes/Inphi_LR-DIMM_with_iMB_Component_gold.jpg

    The article below suggests volume, power and cost will be hard to reduce. However NLST claimed just that with HyperCloud at Supercomputer Expo – that is, memory density, speed increase (for heavily memory-loaded systems which otherwise have to run at slower speeds), power reduction (since 4-rank may allow you to reduce power for “inactive” memory modules), and ability to present more memory in total than otherwise would be handleable.

    The article below suggests it is a complex thing to get right – it is possible that MetaRAM was not able to get enough space on memory module for enough “decoupling capacitors” etc.

    http://lynnesblog.telemuse.net/292
    Feb 25, 2008
    MetaRAM Busts RAMBUS Stranglehold?
    Snake oil or salvation from former AMD CTO,
    By Lynne Jolitz

    Is the technology innovative? Not likely — it sounds like a combination cache and bank decoder, which is not innovative in the least. In fact, you need 4x the number of components on the DIMM, which means 4x the number of current spikes and decoupling capacitors, even if you put the chips together in the same package. Because you have a fifth chip, you complicate things even more. There is no way you can approach the triple-zero (volume, power, cost) sacred to chip designers with such a design, because one single high-speed high-capacity chip will eventually win out given the proliferation of small expensive gadgets demanding the lowest of volume and power. In a world of gadgets like IPODs, cellphones, laptops, PDAs and the like, cost is very important but *not* the most important quantity. So RAMBUS doesn’t have a lot to worry about here.


    So where does little MetaRAM come in. When technology fails, maybe a clever business model will do. MetaRAM’s big claim to fame is cost reduction — not for gadgets or laptops, but according to Fred Weber, CEO of MetaRAM, for “personal supercomputers” and “large databases”. And who is the big licensee for this so-called technology. Why, it’s Hynix of course, who announced they will make this lumbering memory module. They claim it will be lower power. I think I’d like an independent evaluation on this point, but it will probably be lower cost. Is it worth it? Given reliability considerations, that also remains to be seen. But the moral of this saga is simple — human memories are longer than memory architectures in this business, and the real puppet-master behind the throne (Kleiner-Perkins) is sure to walk away with the money. I wish I could say the same for the customers.

    http://mobile.chipcrunch.com/Blogs/Startup.Blurbs/Semiconductor.startups.dropping.like.flies.html
    Semiconductor startups dropping like flies
    Written by Maciej Bajkowski
    Tuesday, 14 July 2009

    We profiled MetaRAM in March of last year, shortly after the company emerged from stealth mode. It was backed by several prominent venture capital firms including: Kleiner Perkins Caulfield & Byers, Khosla Ventures, Storm Ventures, and Intel Capital. This just shows you that having prominent VC backing is not a guaranteed indicator of success. Already back then we had a couple of concerns regarding the MetaRAM technology: First, with increasing DRAM frequency, how long would MetaRAM be able to hide the latency of their chipset via clever buffering of reads and writes? Second, it was inevitable that memory controllers would enable support for ever larger amounts of memory, possibly making MetaRAM technology irrelevant? Whether any of these was the actually reason for the company ceasing operations we might never know. The company’s website seems to be down, and as far as I’m aware nobody has been able to reach any of the company representatives for an official comment.

  63. The article below suggests volume, power and cost will be hard to reduce. However NLST claimed just that with HyperCloud at Supercomputer Expo – that is, memory density, speed increase (for heavily memory-loaded systems which otherwise have to run at slower speeds), power reduction (since 4-rank may allow you to reduce power for “inactive” memory modules), and ability to present more memory in total than otherwise would be handleable.

    The article below suggests it is a complex thing to get right – it is possible that MetaRAM was not able to get enough space on memory module for enough “decoupling capacitors” etc.

    http://lynnesblog.telemuse.net/292
    Feb 25, 2008
    MetaRAM Busts RAMBUS Stranglehold?
    Snake oil or salvation from former AMD CTO,
    By Lynne Jolitz

    Is the technology innovative? Not likely — it sounds like a combination cache and bank decoder, which is not innovative in the least. In fact, you need 4x the number of components on the DIMM, which means 4x the number of current spikes and decoupling capacitors, even if you put the chips together in the same package. Because you have a fifth chip, you complicate things even more. There is no way you can approach the triple-zero (volume, power, cost) sacred to chip designers with such a design, because one single high-speed high-capacity chip will eventually win out given the proliferation of small expensive gadgets demanding the lowest of volume and power. In a world of gadgets like IPODs, cellphones, laptops, PDAs and the like, cost is very important but *not* the most important quantity. So RAMBUS doesn’t have a lot to worry about here.


    So where does little MetaRAM come in. When technology fails, maybe a clever business model will do. MetaRAM’s big claim to fame is cost reduction — not for gadgets or laptops, but according to Fred Weber, CEO of MetaRAM, for “personal supercomputers” and “large databases”. And who is the big licensee for this so-called technology. Why, it’s Hynix of course, who announced they will make this lumbering memory module. They claim it will be lower power. I think I’d like an independent evaluation on this point, but it will probably be lower cost. Is it worth it? Given reliability considerations, that also remains to be seen. But the moral of this saga is simple — human memories are longer than memory architectures in this business, and the real puppet-master behind the throne (Kleiner-Perkins) is sure to walk away with the money. I wish I could say the same for the customers.

    http://mobile.chipcrunch.com/Blogs/Startup.Blurbs/Semiconductor.startups.dropping.like.flies.html
    Semiconductor startups dropping like flies
    Written by Maciej Bajkowski
    Tuesday, 14 July 2009

    We profiled MetaRAM in March of last year, shortly after the company emerged from stealth mode. It was backed by several prominent venture capital firms, including Kleiner Perkins Caufield & Byers, Khosla Ventures, Storm Ventures, and Intel Capital. This just shows you that having prominent VC backing is not a guaranteed indicator of success. Already back then we had a couple of concerns regarding the MetaRAM technology: First, with increasing DRAM frequency, how long would MetaRAM be able to hide the latency of their chipset via clever buffering of reads and writes? Second, it was inevitable that memory controllers would enable support for ever larger amounts of memory, possibly making MetaRAM’s technology irrelevant. Whether either of these was the actual reason for the company ceasing operations we may never know. The company’s website seems to be down, and as far as I’m aware nobody has been able to reach any of the company representatives for an official comment.

  64. Just as Inphi (with its “iMB” buffer) is now hoping for JEDEC approval and then use by memory module makers, similarly MetaRAM was hoping to sell its chipset (and, it seems, make the memory itself as well).

    Inphi also had a press release about the “iMB” buffer for Supercomputer Expo. What is not clear is whether they actually got it working – since Inphi just sells a buffer chip component.

    Since NLST seems to be claiming 4-rank (and it IS the inventor of 4-rank), why has it not gone after makers of 4-rank memory modules before?

    http://www.cmtlabs.com/quadfbdimm.asp
    The Memory Compatibility Experts
    “Quad-Rank Fully Buffered DIMMs”

    Or is it that NLST has targeted the buffer chip manufacturers (MetaRAM, and now Inphi)?

    Hynix and SMOD were banking on MetaRAM at that time:

    http://www.digitimes.com/news/a20080820PR200.html
    Hynix demonstrates DDR3 R-DIMM using MetaRAM technology at IDF
    Press release, August 20; Esther Lam, DIGITIMES [Wednesday 20 August 2008]

    Hynix using MetaRAM “chipset” – MetaRAM memory module has Hynix logo on it (page 10):
    http://www.ansoft.com/ie/Track2/DDR3%20Memory%20Module%20Design.pdf

    http://www.epn-online.com/page/new56803/smart-launches-8gb-dual-rank-ddr2-rdimms.html
    SMART launches 8GB dual-rank DDR2 RDIMMs
    04/03/2008

    The new module combines SMART’s new DDR2 packaging technologies with the MetaRAM chipset architecture.

  65. I am trying to understand how GOOG’s use of “Mode C” is an indicator of infringement. Is use of 4-rank itself infringement?

    NLST was the originator of 4-rank. Yet it was made into a JEDEC standard.

    Does anyone know the history of how that worked?

    But 4-rank is a JEDEC standard – if NLST was the innovator of 4-rank, how did that become the standard?

    Does this mean NLST disapproves of it – including of all the other manufacturers who make it?

    But, lacking legal resources, is it only going after a few players first?

    Bill Gervasi (now at SimpleTech) was at NLST at the time of 4-rank development.

    He was also Chairman of JEDEC committee on memory modules.

    http://www.docmemory.com/page/news/showpubnews.asp?title=What+is+a+4-Rank+DIMM+Memory+%3F&num=128
    For a successful implementation of 4-rank DIMM memory, system designers need to be aware of which processors and memory controllers are enabled to support four-rank modules. Finally, it is necessary to note that byte five of the serial presence detect (SPD) describes the number of ranks on a module.

    Many system designers are now rushing to find out what “4-rank memory” is all about. We have the pleasure of introducing Bill Gervasi, the inventor/initiator of “4-rank memory”, to further explain the technical details regarding 4-rank DIMMs.

    4-rank modules, recently approved by JEDEC, address this gap by allowing up to 72 DRAMs per memory slot, enabling the 32GB-per-CPU capacity goal using commodity 512Mb DRAMs. When 1Gb DRAMs are finally in mass production, 4-rank modules double the reach again to 64GB per CPU.
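
    The capacity math in the excerpt above (and its note about SPD byte five) can be sketched in a few lines – a rough illustration only; the 64-data-plus-8-ECC split, the 8-slots-per-CPU figure, and the exact SPD bit layout are my assumptions, not the article’s:

```python
# Rough sketch of the 4-rank capacity arithmetic quoted above. Assumptions
# (mine, not the article's): 72 DRAMs per module = 64 data + 8 ECC, and
# 8 memory slots per CPU.

def module_data_gb(dram_density_mb, data_drams=64):
    """Usable (non-ECC) capacity of one module in GB, given DRAM density in Mb."""
    return data_drams * dram_density_mb / (8 * 1024)  # megabits -> gigabytes

def per_cpu_gb(dram_density_mb, slots=8):
    """Total capacity per CPU across all memory slots."""
    return slots * module_data_gb(dram_density_mb)

# With commodity 512Mb DRAMs: 32.0 GB per CPU; with 1Gb DRAMs: 64.0 GB.
print(per_cpu_gb(512), per_cpu_gb(1024))

# The excerpt also notes that SPD byte 5 describes the rank count. On DDR2
# SPD the low three bits of byte 5 encode "ranks minus 1" (treat the exact
# bit layout as an assumption -- check the JEDEC SPD spec for the module
# generation in question).
def ranks_from_spd_byte5(byte5):
    return (byte5 & 0x07) + 1

print(ranks_from_spd_byte5(0x03))  # 4 ranks
```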

  66. netlist, thank you for sharing all of your research and reasoning on this thread. your efforts and generosity are very much appreciated.

  67. And Bill, thank you for starting this thread about Google, MetaRAM, the patents, and the lawsuits. This is the best thread of information about Netlist that I know of. Cheers.

    Happy New Year to all, and may all Netlist investors prosper.

  68. i just reread the entire thread. want to thank auditor too, and everyone else who contributed to this thread. didn’t mean to take anyone for granted. thanks all, very helpful info & discussion.

  69. An update on the various court cases.

    Looks like NLST vs. Inphi (and retaliatory Inphi vs. NLST) are on track.

    GOOG vs. NLST and NLST vs. GOOG (inspired by discovery in GOOG vs. NLST) have been consolidated (request of both GOOG and NLST) – both to be heard by Judge Armstrong.

    NLST extended GOOG’s time to answer the complaint to Jan 29, 2010.

    Meanwhile, NLST vs. MetaRAM (and the retaliatory MetaRAM vs. NLST – although MetaRAM does hold some IP, unlike Inphi) have both been withdrawn by the parties.

    Since MetaRAM is in bankruptcy, it would want to end the litigation – in any event, MetaRAM vs. NLST wouldn’t have much meat if they no longer own the patents they were asserting (though perhaps they could still assert harm caused by NLST while MetaRAM owned those patents).

    NLST probably can’t get much from a bankrupt MetaRAM – although they MAY have been able to block the transfer of IP from MetaRAM to GOOG (since NLST had potential recoveries to make from MetaRAM estate in case of win against them for infringement).

    So is this related to a gradual “understanding” in the NLST vs. GOOG case – not necessarily a settlement, but an agreement on how the case should proceed (as usually happens between two opposing legal teams – i.e. they agree on the terms on which the fight will proceed)?

    Reasons why NLST would withdraw its case against MetaRAM:
    – removes MetaRAM vs. NLST (minor inconvenience though it may be)
    – reduces court costs and whittles away nonessentials (a moral victory against MetaRAM is less interesting than one against a still-healthy GOOG or Inphi) – plus the same boutique legal team is handling all the cases (with an allied legal firm as well)
    – having MetaRAM withdraw may help slightly in the fight against GOOG (to neutralize GOOG’s use of MetaRAM-like arguments – since GOOG now holds MetaRAM’s IP)

    Reasons why MetaRAM (privately held, but still a limited company?) would withdraw its case:
    – it is in bankruptcy – limited options
    – no real retaliatory case against NLST (especially true if MetaRAM folded partly because of that understanding – that it had a weak hold on its IP)

  70. Does anyone know the answers to any of these questions?

    1)Why did Google originally decline to use Netlist’s product, and instead order products from MetaRAM?

    2)Why did MetaRAM declare bankruptcy, and are they planning to emerge from bankruptcy and continue as a private, limited company? If so, what will their business be?

    3)Google is claiming that Netlist’s patents are “invalid”. In what way? What evidence or reasoning supports this argument?

    4)Reportedly, neither Google nor Inphi are seeking monetary damages from Netlist, but Netlist is seeking monetary damages from Google and Inphi. Does this fact suggest that Netlist has the stronger cases against Google and Inphi?

    5)When is it likely that Netlist’s new product “Hypercloud” will complete trials by OEMs, be approved and certified, be ordered in great volumes, and start generating significant earnings for Netlist?

    6)How might Netlist be negatively affected by adverse judgments in the two court cases, and by the JEDEC committee’s impending decision on memory product standards?

    7)If Google loses or settles the case with Netlist, is Google likely to become a paying customer of Netlist?

    My thanks to anyone for their thoughts on, or answers to, these and related matters.

  71. Two more questions and thoughts:

    8)If MetaRAM is planning to emerge from bankruptcy and continue as a private company, why would they sell their many patents to Google (and be left with no IP) unless their patents do in fact infringe on Netlist’s patents, and are more of a liability than an asset going forward?

    9)If MetaRAM’s many patents do infringe on Netlist’s patents, why did Google quickly buy them all from a bankrupt MetaRAM? If MetaRAM’s patents infringe on Netlist’s patents, they should be useless to Google as a legal defense in the court case with Netlist, as bargaining leverage with Netlist, and as a basis for Google or its contractors to manufacture memory products as an alternative to, and competitor against, Netlist’s memory module solution for servers.

    Since Netlist sued MetaRAM over MetaRAM’s patents allegedly infringing on Netlist’s patents, Google must know that Netlist will sue Google if Google ever tries to use MetaRAM’s patents to manufacture memory products.

  72. 1)Why did Google originally decline to use Netlist’s product, and instead order products from MetaRAM?

    From what we know now from court dockets – GOOG has an internal hardware group which wanted to MANUFACTURE memory modules. They discussed with various parties (including NLST) manufacturing memory modules according to GOOG specs and components. At that time NLST may have revealed what it was able to offer (or was in the process of developing – since NLST had that lull while it transitioned to the China factory). In either case GOOG may have felt NLST was unable to deliver at that time – plus GOOG may have wanted to do it themselves (given they had their own team inside GOOG).

    Eventually they wound up using other suppliers.

    This by itself does not reflect badly on NLST. What it does reveal, however, is that GOOG was far more complicit than an innocent buyer of memory from MetaRAM or others (as I was assuming earlier). Thus a direct infringer.

    2)Why did MetaRAM declare bankruptcy, and are they planning to emerge from bankruptcy and continue as a private, limited company? If so, what will their business be?

    They had the support of INTC and others (basically supplying the buffer chip – like Inphi is wanting to do now). Now MetaRAM claims (in court dockets) that they only sold like $37K worth of goods (?) and “destroyed” the rest – so they aren’t infringing NLST stuff (!).

    Inphi is doing something similar to MetaRAM (except Inphi only makes the buffer chip, while MetaRAM had the buffer chip plus the ability to create memory modules). However, as pointed out above, MetaRAM may have used “stacking” and similar means, which NLST looks askance at – because of its asymmetric heat dissipation and line lengths (asymmetric delay on lines).

    3)Google is claiming that Netlist’s patents are “invalid”. In what way? What evidence or reasoning supports this argument?

    This is standard boilerplate language for anyone’s first response to any patent claim – you can see it in all the patent cases.

    You will note GOOG “rushed” to court over NLST’s “letter”. This is because GOOG probably saw no (simple) answer to NLST’s claims in that letter – it would inevitably lead to complex arguments. So GOOG chose to take it to court (in GOOG vs. NLST). That court case wound up costing GOOG – they had to turn over a GOOG server to NLST – which resulted in the discovery of “Mode C” usage and data for NLST. NLST already had counterclaims in GOOG vs. NLST, but they were probably waiting for additional data from this discovery – which they used in NLST vs. GOOG (which is more recent).

    Another advantage for GOOG in going to court is that it establishes an orderly method to deal with this “threat”. Because it affects the health of GOOG’s entire server infrastructure (a typical GOOG server is using “Mode C”, which is a smoking gun for “4-rank” usage), it was an essential asset to protect. Now in court proceedings, GOOG has the luxury of doing things in an orderly manner – no tension – if they are weak they settle and pay in an orderly way without any threat to GOOG’s structure. Plus they have the option to do a buy deal with NLST (if NLST HyperCloud is that superior).

    Circumstantial evidence suggests GOOG’s purchase of MetaRAM’s assets is a ploy to gain SOME leverage. However, as you have seen, the MetaRAM cases have been voluntarily withdrawn by both NLST and MetaRAM – so this may affect GOOG adversely in that those cases won’t help it much in discovery or in issues against NLST.

    MetaRAM has significant IP – however it is IP in “stacked” modules and such, which may or may not overlap NLST’s. Plus NLST has earlier priority (March 2004 antecedents) in the relevant patents.

    Note also that NLST’s position is significantly different from a year ago – at that time, even if GOOG had wanted to, it could not have done a deal with NLST (as NLST was still going through the transition to the Chinese factory and the move off commodity memory into these high-margin products).

    4)Reportedly, neither Google nor Inphi are seeking monetary damages from Netlist, but Netlist is seeking monetary damages from Google and Inphi. Does this fact suggest that Netlist has the stronger cases against Google and Inphi?

    NLST IS seeking damages – treble damages (for willful violation etc.). This by itself doesn’t mean they have a “stronger” case.

    The reason GOOG hasn’t claimed damages is that the tone of GOOG vs. NLST is to “please protect us from NLST” – as stated above it is basically a structured arena where GOOG can safely deal with this problem in a controlled way – i.e. if it works out good if not pay.

    The reason Inphi hasn’t claimed damages, is that they have a (some would say) frivolous suit (retaliatory). Secondly they have not been damaged by NLST yet. In any case Inphi is a component maker which is not exactly focused on this niche and it’s IP is weak in this area.

    On a related note, John Smolka (former Inphi employee) joined NLST recently (per an SEC filing on the awarding of options).

    5)When is it likely that Netlist’s new product “Hypercloud” will complete trials by OEMs, be approved and certified, be ordered in great volumes, and start generating significant earnings for Netlist?

    Someone else may have better insight into this.

    6)How might Netlist be negatively affected by adverse judgments in the two court cases, and by the JEDEC committee’s impending decision on memory product standards?

    The JEDEC committee is probably conflicted, because their proposed standard conflicts with NLST IP. This means MU and others will not be using Inphi buffer chips. So basically the alternative to HyperCloud is on ice until JEDEC decides how to proceed.

    NLST will be negatively affected if it “loses” the court cases – which is unlikely given NLST’s strong position in this area – i.e. second to none. If there is overlapping IP – then there is a settlement. In any case, there are no real “competitors” left in this area. MetaRAM was the only one seriously specialized in it (and a supplier of memory buffers), plus it has some IP. Inphi does not come close. GOOG is a serious player, but it too has weak IP in this area (only the MetaRAM IP it just bought). Plus “4-rank” specifically (i.e. “fooling” the processor/memory controller into seeing fewer ranks than are really there) is an NLST patent with antecedents back to March 2004. Plus there is a history of leakage – from the Texas Instruments leak to the JEDEC committee, to MetaRAM, to GOOG’s discussions with NLST prior to making their own memory – that fits into a “story”. Bill Gervasi – inventor of 4-rank while at NLST – was later head of the JEDEC committee on memory modules – so there is probably some promiscuous employment (given such a small niche area).
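
    The “fooling the memory controller” idea mentioned above – rank multiplication – can be sketched roughly like this (the function name and bit positions are purely illustrative, not taken from any NLST or MetaRAM patent text):

```python
# Illustrative sketch of rank multiplication: a buffer on the module presents
# four physical ranks to the controller as two logical ranks, decoding a spare
# address bit into an extra chip-select. All names/bit positions are made up.

def decode_physical_rank(logical_cs, spare_addr_bit):
    """Map (logical chip-select 0..1, spare address bit 0..1) -> physical rank 0..3."""
    return (logical_cs << 1) | spare_addr_bit

# The controller only ever asserts logical CS 0 or 1, yet all four physical
# ranks on the module are reachable:
reachable = sorted(decode_physical_rank(cs, a) for cs in (0, 1) for a in (0, 1))
print(reachable)  # [0, 1, 2, 3]
```

    The point of the trick is the same one the thread keeps circling: the host thinks it is driving a plain 2-rank module, while the buffer fans commands out to twice as many ranks behind it.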

    7)If Google loses or settles the case with Netlist, is Google likely to become a paying customer of Netlist?

    It is unlikely GOOG will “lose” the case – that would mean shutting down the GOOG network. It’s not like GOOG can’t pay any price that is required – so more likely GOOG will eventually settle – either for a cash sum, or more likely (to escape the black eye of violating the “do no evil” motto) they would opt for something “neutral” like overpaying for NLST memory. Or if GOOG is confident in its own manufacturing (some have suggested their in-house hardware division is not all that great) they may license instead.

    Of course such a decision would have devastating consequences on the JEDEC FBDIMM “Mode C” proposed standard.

    GOOG would probably like there to be a standard – for better pricing (since it is a big consumer of memory).

    So one option (best for GOOG) would be some arrangement where NLST IP is allowed by NLST to become JEDEC standard – in return for something or other (i.e. shades of RMBS).

  73. netlist,

    Thank you very much for your fast and detailed reply. I’m glad you’re on this thread. All the best.

  74. 8)If MetaRAM is planning to emerge from bankruptcy and continue as a private company, why would they sell their many patents to Google (and be left with no IP) unless their patents do in fact infringe on Netlist’s patents, and are more of a liability than an asset going forward?

    Unlikely that MetaRAM would emerge from bankruptcy – usually companies go into bankruptcy to shed debt. In many cases the management can continue (if resurrected) under new owners. In MetaRAM’s case the management WERE the owners. So it is unlikely to emerge AS MetaRAM.

    However, it lives on as GOOG-owned MetaRAM IP, which GOOG will probably use to bolster its position against NLST, and possibly for future dealings with other companies (since patents tend to get used as currency as well – if sued, countersue with patents the other may be infringing – given the state of excessive issuance of patents in overlapping areas).

    After sale of IP to GOOG, MetaRAM assets are further reduced, so “MetaRAM” of old probably will not emerge.

    Think now of GOOG as the new “MetaRAM”.

    9)If MetaRAM’s many patents do infringe on Netlist’s patents, why did Google quickly buy them all from a bankrupt MetaRAM? If MetaRAM’s patents infringe on Netlist’s patents, they should be useless to Google as a legal defense in the court case with Netlist, as bargaining leverage with Netlist, and as a basis for Google or its contractors to manufacture memory products as an alternative to, and competitor against, Netlist’s memory module solution for servers.

    Well having those MetaRAM patents (on the cheap) is probably better than appearing in court without pants on.

    Since Netlist sued MetaRAM over MetaRAM’s patents allegedly infringing on Netlist’s patents, Google must know that Netlist will sue Google if Google ever tries to use MetaRAM’s patents to manufacture memory products.

    GOOG is not trying to “win” the case with the MetaRAM patents – it is just slightly “better” to have them. That is, GOOG can perhaps get away with fewer violation issues, or pressure NLST on other fronts as a nuisance.

    However, note that GOOG’s situation is not symmetric with NLST’s. GOOG is an existing infringer – so it is in for some “damages”, and there is also the threat of treble damages. Not that the money will be of great concern to GOOG with its billions – but still, as lawyers, GOOG’s attorneys will seek to limit the damage to GOOG and avoid a jury trial at the last minute.

    Language is typical for such cases:

    ..Google’s infringing activities in the United States and this District include it’s use of 4-Rank FBDIMMs in it’s server computers and contributing to and/or inducing others to make, use, sell, and/or offer for sale such 4-Rank FBDIMMs, and/or components thereof which lack any substantive non-infringing use.

    ..Google’s infringement of the ‘912 patent is wilful and deliberate ..

    ..Netlist be awarded damages adequate to compensate Netlist for ..

    ..That the court award treble damages to Netlist for the unlawful practices described in this Complaint.

    ..That the court render judgement declaring this to be an exceptional case.

  75. netlist,

    Thanks again for your most recent post. I learned a lot of good info from it.

    Overall, it seems that Netlist is in the best position. In contrast, Google and Inphi (and MetaRAM and Texas Instruments) seem to have engaged in questionable conduct, but Netlist has not, apparently.

    And the fact that Netlist has a brand-new, potentially “breakthrough” product in an important niche of the emerging cloud-computing market, at a time when there are no real or strong competitors, suggests that Netlist should prosper significantly in 2010 — after a few months of resolving the current conflicts.

    And since Netlist required 1-2 years and tens of millions of dollars to develop Hypercloud, it is unlikely that any serious competition to Hypercloud will appear for at least a year.

    Like you, I have been thinking about Google’s founding principle and solemn commitment to “do no evil”. They appear to have violated their own values in their dealings with Netlist, so it will be interesting to see if Google redeems itself by compensating Netlist properly — eventually.

  76. netlist,

    Thanks also for your fast and detailed reply to questions 8 and 9.

    I got a good laugh from your witty line about Google buying MetaRAM’s patents to avoid “appearing in court without pants on.”

    I agree with you that Google’s recent purchase of patents by MetaRAM may help Google a little, but most of Google’s alleged misconduct occurred while it did not own MetaRAM’s patents. As you know, Google’s current ownership of MetaRAM’s patents will not give Google “retroactive” protection against Google’s alleged misconduct when it did not own MetaRAM’s patents.

    So I also agree with you that Google appears to have knowingly and willfully infringed against Netlist’s legal rights — and therefore will eventually have to compensate Netlist to some degree. The extent of damages to Netlist and of compensation by Google is what the court will determine.

    Also, the court will realize that Google did not invent anything related to MetaRAM’s patents, was not the original filer or owner of the patents, and only recently rushed to purchase MetaRAM’s entire list of patents to try to protect itself from its prior misconduct and current legal liabilities.

    Under these circumstances, I doubt that the court is going to look very favorably on Google’s belatedly acquired, “second-hand” patents.

    By the way, I also agree with your earlier doubts about the suspicious claims by MetaRAM that it sold only $37K worth of product and destroyed the rest of production (why was that, hmm?), and therefore, committed little or no infringement against Netlist.

    Aside from wanting justice for Netlist (in court and in the market), I will be interested to eventually learn convincing explanations for many of the “mysteries” in this story.

  77. Two corrections to my recent remarks:

    1)It’s my understanding that MetaRAM claimed that it destroyed all of the products worth $37,000 that it sold, but which were never used commercially by the buyer (thought to be Google).

    2)There is at least one serious competitor to Netlist’s “Hypercloud” product: Cisco. But Netlist’s product attaches directly to the memory, while Cisco’s product attaches to the motherboard. Apparently this difference gives Netlist’s product an advantage over Cisco’s product in performance.

  78. If NLST is going after GOOG for “4-rank” usage, this is a JEDEC standard and plenty of other memory makers make 4-rank memory modules.

    Or is it that in some specific way those do not infringe – or is it that they ALL infringe, and NLST has simply chosen the fight with GOOG (being the most prominent and the best player from which to get an early resolution in court)?

    If so, that could mean the JEDEC standard uses NLST IP, and other memory module makers would have to fall in line if GOOG concedes?

  79. netlist,

    If and when you can, would you please explain what you think the positive and negative effects on NLST would be if JEDEC adopts Netlist’s IP and Hypercloud technology as the JEDEC standard?

    Thanks.

  80. If and when you can, would you please explain what you think the positive and negative effects on NLST would be if JEDEC adopts Netlist’s IP and Hypercloud technology as the JEDEC standard?

    I don’t know how something being a “standard” relates to something being “proprietary”. On the face of it JEDEC being a standards body just specifies a common way of doing something – and is a middle player to do that for a disparate and competitive group of companies.

    It may not have anything to do with whether it is proprietary or not. Generally JEDEC would want to standardize on something that does NOT do something proprietary (to minimize costs of going with that standard).

    In fact RMBS (Rambus) was accused of being part of early negotiations in the standards-setting process for DRAM and using that prior knowledge to patent things ahead of the standards process – in effect STRENGTHENING its hold on what would eventually become the standard. Essentially a way of herding competing manufacturers into a corner (having committed to a certain way of manufacturing) so it could squeeze out royalty payments later. In effect harming the whole purpose of the standards-setting body – to make things easier (and cheaper) for the industry.

    http://en.wikipedia.org/wiki/Rambus
    July 30, 2007, the European Commission launched antitrust investigations against Rambus, taking the view that Rambus engaged in intentional deceptive conduct in the context of the standard-setting process, for example by not disclosing the existence of the patents which it later claimed were relevant to the adopted standard. This type of behaviour is known as a “patent ambush”.

    Given this context it seems reasonable that JEDEC would wait before it finalizes a proposed standard – unless of course it is assured that the license fees related to that standard are going to be “reasonable” (by NLST).

    From above link:
    February 5, 2007, U.S. Federal Trade Commission issued a ruling that limits maximum royalties that Rambus may demand from manufacturers of dynamic random access memory (DRAM), which was set to 0.5% for DDR SDRAM for 3 years from the date the Commission’s Order is issued and then going to 0; while SDRAM’s maximum royalty was set to 0.25%. The Commission claimed that halving the DDR SDRAM rate for SDRAM would reflect the fact that while DDR SDRAM utilizes four of the relevant Rambus technologies, SDRAM uses only two. In addition to collecting fees for DRAM chips, Rambus will also be able to receive 0.5% and 1.0% royalties for SDRAM and DDR SDRAM memory controllers or other non-memory chip components respectively.

    This would suggest that JEDEC CAN wind up in a position where it is pushing a standard heavily tied to one company’s IP – resulting in allied royalty payments.

    So on the face of it – no, it would not harm NLST if JEDEC adopts NLST-related technology as a standard. In fact it would HELP NLST – since it would herd more folks into doing things that infringe NLST IP – thereby increasing the potential royalty collection by NLST in the future (once IP issues are resolved in court).

    JEDEC NOT adopting NLST-related technology as a standard doesn’t help NLST – since it means the industry is doing something that is unrelated (and thus un-royalty-collectable by NLST).
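
    As a toy illustration of what capped royalty rates like the FTC’s mean in dollar terms – only the 0.5%/0.25% rates come from the quote above; the revenue figures are invented:

```python
# Toy royalty arithmetic using the FTC-capped Rambus rates quoted earlier:
# 0.5% for DDR SDRAM and 0.25% for SDRAM. Revenue numbers are made up.

def royalty_usd(revenue_usd, rate_pct):
    """Royalty owed on a revenue figure at a percentage rate."""
    return revenue_usd * rate_pct / 100.0

print(royalty_usd(1_000_000, 0.5))   # DDR SDRAM: 5000.0
print(royalty_usd(1_000_000, 0.25))  # SDRAM: 2500.0
```

    Small percentages of a commodity market’s revenue are why a standards body cares so much about whose IP ends up inside the standard.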

  81. netlist,
    thank you for clarifying this complex issue. You seem to have an excellent grasp of technology as well as legal issues. I have a question that you might be able to shed some light on: since Netlist has to buy RAM on the open market, how can it be competitive compared to DRAM manufacturers such as Elpida and Micron?

  82. Since Netlist has to buy RAM on the open market, how can you be competitive compared to DRAM manufacturers such as Elpida and Micron?

    I don’t know enough about this to comment – but it seems NLST is in similar situation as other memory module makers. Which include those that do not make their own memory chips:
    STEC – Simple Tech
    SMOD – Smart Modular

    Also it seems these memory makers themselves can be buyers of NLST-like tech. For example Hynix (one of major memory chip makers) had licensed MetaRAM:

    http://lynnesblog.telemuse.net/292
    Feb 25, 2008
    MetaRAM Busts RAMBUS Stranglehold?
    Snake oil or salvation from former AMD CTO,
    By Lynne Jolitz

    MetaRAM’s big claim to fame is cost reduction — not for gadgets or laptops, but according to Fred Weber, CEO of MetaRAM, for “personal supercomputers” and “large databases”. And who is the big licensee for this so-called technology. Why, it’s Hynix of course, who announced they will make this lumbering memory module. They claim it will be lower power. I think I’d like an independent evaluation on this point, but it will probably be lower cost. Is it worth it?

    http://www.digitimes.com/news/a20080820PR200.html
    Hynix demonstrates DDR3 R-DIMM using MetaRAM technology at IDF
    Press release, August 20; Esther Lam,
    DIGITIMES [Wednesday 20 August 2008]

    Intel will demonstrate the world’s first 16GB 2-rank DIMM from Hynix, using the MetaRAM DDR3 chipset at IDF. Intel will also demonstrate a server with 160GB using Hynix DDR3 R-DIMMs and Meta SDRAM technology, Hynix said.

    So memory chip makers also DO deals with companies like NLST – in order to build more complicated modules (that include more than just memory chips).

    Here is a list of memory chip manufacturers:
    http://www.interfacebus.com/memory.html

    This article lists the dominant memory chip makers (not the same as memory module makers):

    http://news.cnet.com/8301-13924_3-10057284-64.html
    October 3, 2008 4:00 AM PDT
    Memory chipmakers face survival test
    by Brooke Crothers

    Hynix – in financial trouble due to an extended drought in memory over the last 2 years (low prices, low margins). However it is linked to the South Korean government and can get a bailout.

    Samsung

    Qimonda AG (Infineon) – “ailing”

    MU – “largest U.S. maker of memory”

  83. question:
    Since Netlist has to buy RAM on the open market, how can you be competitive compared to DRAM manufacturers such as Elpida and Micron?

    So the short answer is that companies like NLST have to buy memory chips from those companies, but if those companies want to make memory modules they have to license from companies like NLST – or buy buffer chips (as they were planning to do from Inphi, and earlier from MetaRAM).

    From the chart they show, you can see that the companies which license their technology (i.e. their IP – intellectual property) are the ones with the greatest gross margins.

    http://seekingalpha.com/article/16968-gross-margin-kings-memory-chip-manufacturers

    Gross Margin Kings – Memory Chip Manufacturers
    by: Robert Zenilman September 15, 2006 | about: CY / SNDK / MU / SFUN / RMBS / IDTI / ISSI / MOSY / RMTR / SSTI / STEC / STAK

    Rob Zenilman submits: Within a specific sector, gross margins can differ dramatically, due to the different nature of their businesses. Among the memory chip manufacturers tracked here, gross margins ranged from 24.8% (STEC) up to 85.7% (RMBS). The companies that have drastically higher gross margin are what I like to call “gross margin kings”.

    What separates out the companies with the four highest gross margin rates is that they earn money by licensing out their technology. 88% of Rambus’ revenue is from licensing, 100% for MoSys (MOSY), 56% for Saifun Semiconductors (SFUN), 100% for Virage Logic (VIRL) and 25% for Staktek Holdings (STAK).

    However, having high gross margins is no guarantee of profitability. Of the four companies here with gross margins over 70% (and that derive most of their revenue from licensing) – only Rambus has a positive P/E of 62.84.
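    The margin gap the article describes is straightforward arithmetic. Here is a toy Python sketch, using hypothetical revenue and cost figures (not the actual financials of any company named here), of why a pure licensor's gross margin dwarfs a chip maker's:

```python
def gross_margin(revenue: float, cost_of_goods: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cost_of_goods) / revenue

# Hypothetical figures, for illustration only: a DRAM maker carries
# heavy fab costs per dollar of revenue, while a licensor's incremental
# cost of granting one more license is near zero.
dram_maker = gross_margin(revenue=100.0, cost_of_goods=75.0)  # 0.25
licensor = gross_margin(revenue=100.0, cost_of_goods=15.0)    # 0.85

print(dram_maker, licensor)
```

    These made-up numbers roughly bracket the 24.8% to 85.7% range the article reports, which is the point: the business model, not operational skill, drives most of the spread.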

  84. Can anyone here provide any details about the kind of information that NLST CEO Hong will probably discuss in his “investor presentation”? Thanks.

    Netlist to Present at the Needham Growth Stock Conference in New York City

    IRVINE, Calif., Jan. 6 /PRNewswire-FirstCall/ — Netlist, Inc. (Nasdaq: NLST) today announced that CEO C.K. Hong is scheduled to make an investor presentation at the Needham 12th Annual Growth Stock Conference on Thursday, January 14, at 2:30 pm Eastern Time. The conference is being held January 12-14, at The New York Palace in New York City.

    The presentation will be accessible by live webcast in the Investors section of the Netlist website at http://www.netlist.com. A replay of the webcast will be available on the Netlist website for 30 days.

  85. Another twist that makes things even more interesting here.

    According to the USPTO Assignment database, MetaRAM has licensed the use of the method in patent 7,472,220 to Netlist, and Netlist has licensed the use of the method in patent 7,289,386 to MetaRAM.

    Interface circuit system and method for performing power management operations utilizing power management signals

    Memory module decoder

    It appears that the execution date on the conveyances was December 21, 2009, and the recording of the assignments took place on January 4, 2010.

    The ‘386 patent appears to be at the heart of some of the litigation between Google and Netlist, and between Netlist and MetaRAM. Part of a settlement between Netlist and MetaRAM? I don’t know for certain. Might be interesting to listen in to the live webcast that netlistfan mentioned in the comment above this one.

  86. This is seriously interesting. Thanks !!

    USPTO assignment search page – entering patent number to search:
    http://assignments.uspto.gov/assignments/?db=pat

    Reveals that:

    7472220 – MetaRAM license to NLST ..
    http://assignments.uspto.gov/assignments/q?db=pat&qt=&reel=&frame=&pat=7472220&pub=&asnr=&asnri=&asne=&asnei=&asns=

    7289386 – NLST license to MetaRAM ..
    http://assignments.uspto.gov/assignments/q?db=pat&qt=&reel=&frame=&pat=7289386&pub=&asnr=&asnri=&asne=&asnei=&asns=

    So this is a cross-licensing arrangement – and it fits in with the recent withdrawal of cases by both parties in NLST vs. MetaRAM and MetaRAM vs. NLST (as reported above).

    The USPTO assignment info for each patent shows:
    Conveyance: LICENSE (SEE DOCUMENT FOR DETAILS).

    Compare with the patents that were sold to GOOG (as reported above) – for example:
    7580312 – Power saving system and method for use with a plurality of memory circuits
    http://assignments.uspto.gov/assignments/q?db=pat&qt=&reel=&frame=&pat=7580312&pub=&asnr=&asnri=&asne=&asnei=&asns=

    These have:
    Conveyance: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS).

    That is, the “ownership” (ASSIGNORS INTEREST) is transferred to GOOG.
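    As an aside, the lookup URLs quoted above all follow one pattern. A minimal, illustrative Python sketch (assuming only the URL format visible in the links above – this endpoint has no official API) that builds such a query for any patent number:

```python
from urllib.parse import urlencode

# Base endpoint of the USPTO patent assignment search, exactly as it
# appears in the links quoted above (historical URL).
BASE = "http://assignments.uspto.gov/assignments/q"

def assignment_query_url(patent_number: str) -> str:
    """Build an assignment-lookup URL like the ones quoted above.

    Only `db` and `pat` carry values; the remaining fields are sent
    empty, mirroring what the search form submits.
    """
    params = [
        ("db", "pat"), ("qt", ""), ("reel", ""), ("frame", ""),
        ("pat", patent_number), ("pub", ""), ("asnr", ""),
        ("asnri", ""), ("asne", ""), ("asnei", ""), ("asns", ""),
    ]
    return BASE + "?" + urlencode(params)

# For example, the '220 patent discussed above:
print(assignment_query_url("7472220"))
```

    Checking the “Conveyance” field of the resulting record then tells you whether the transaction was a LICENSE or an ASSIGNMENT OF ASSIGNORS INTEREST, as described above.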

    In an earlier post I had wondered HOW NLST allowed MetaRAM to sell its IP – since NLST was potentially due money (in the case of an eventual win in court against MetaRAM).

    Now it seems something similar to that DID happen – i.e. either:

    – NLST signalled to MetaRAM to keep certain IP in hand (while it could sell other stuff it was not interested in – like the IP on “stacked memory” which NLST has claimed has serious asymmetry issues – search above for “stacked”).

    – MetaRAM recognized which of its IP would be valuable in eventually getting NLST off its back, and retained THOSE patents.

    The date of assignment for the patents sold to GOOG is 09/11/2009:
    Assignor: METARAM, INC. Exec Dt: 09/11/2009

    While the ones licensed to NLST are dated 12/21/2009:
    Assignor: METARAM, INC. Exec Dt: 12/21/2009

    So MetaRAM KNEW as early as 09/11/2009 that it would not need THOSE patents – and which ones NOT to sell to GOOG (!).

    Some other questions are left unanswered:

    – why NLST did not insist that all of MetaRAM’s IP be given (or sold) to NLST – maybe NLST wasn’t interested in all of it ?

    – why GOOG didn’t overpay to buy ALL of MetaRAM’s IP – including the patents that MetaRAM retained. Or did MetaRAM decline, since it needed those to fend off NLST for an eventual settlement, so that its bankruptcy proceedings could proceed unhindered?

    – what happens to the patents MetaRAM has retained (not sold to GOOG) – like the cross-licensing patents. Can NLST claim interest in who gets the patents in bankruptcy proceedings since it is (now) a licensee ?

    As well as this question:

    – does MetaRAM hold other patents that it has NOT sold to GOOG? It would be hard to believe GOOG would not want the most NLST-specific ones, but were there MetaRAM patents that GOOG did not buy – which MetaRAM is still holding on to? And why – since there is little value in retaining those patents; as a company, those assets will have to be liquidated during bankruptcy.

    As conjectured above, the mutually agreed dismissal of cases – NLST vs. MetaRAM and MetaRAM vs. NLST – bodes well for the strategy the NLST lawyers were adopting. One of conciliation with a defeated enemy in order to position better for the fight against the larger one:

    – since not much is extractable from a bankrupt company, NLST can at least make sure that information from discovery etc. in these cases is not available to help GOOG in the NLST/GOOG cases.

    Now it seems NLST DID get something from that settlement as well – broader coverage thanks to help from MetaRAM patents.

  87. Searching the USPTO assignment search page – entering METARAM as “Assignor”, then clicking the “METARAM, INC” name that appears:
    http://assignments.uspto.gov/assignments/q?db=pat&asnrd=METARAM,%20INC.

    shows the patents that MetaRAM has assigned to others.

    http://assignments.uspto.gov/assignments/q?db=pat&asned=NETLIST,%20INC.
    The patents that were assigned to NLST. Only the 7472220 patent appears for Netlist.

    MetaRAM patents being transferred to GOOG number around 50 + 7 (patents or filings).
    http://assignments.uspto.gov/assignments/q?db=pat&asned=GOOGLE%20INC.&page=15

  88. quote:
    So MetaRAM KNEW as early as 09/11/2009 that it would not need THOSE patents – and which ones NOT to sell to GOOG (!).

    Another possibility is that MetaRAM sold off its IP without too much thought – but because they were being sued by NLST, and they were in turn retaliatory-suing NLST based on the 7472220 patent, they HAD to retain that one. So everything else went on sale, but they had to keep that patent in hand in order to retain some standing in the court case against NLST (which was their counterweight to NLST’s suit against them).

    When MetaRAM/NLST settled, this patent was lying around, so it became part of the eventual settlement – i.e. cross-licensing between the two.

    So maybe this is the (simpler) interpretation.

    The question is, why did MetaRAM license the NLST patent then? Is it standard procedure to cross-license this way? Or is this the standard “closure” to a case – making each party “whole” by giving them a license to the patent that nullifies the case (so, for instance, the same type of suit cannot be filed again, either by NLST against MetaRAM or by MetaRAM against NLST) – having nothing to do with whether MetaRAM actually intends to use the NLST patent (probably not)?

  89. Bill,

    Thanks for making and sharing your latest discovery. Very interesting indeed.

    And netlist,

    Thanks for building on Bill’s discovery by sharing your related discoveries and by thinking through the implications and possibilities.

    Great detective work, you two. The mystery slowly unfolds…

  90. Thanks, netlist and netlistfan,

    I’m very thankful for the comments and questions and information being shared here by everyone.

    I’m still wondering how the licensing of technology to MetaRAM might affect the litigation between Netlist and Google, if at all.

  91. from Briefing.com this evening:

    “4:49PM NetList files for $30 mln mixed securities shelf offering”

    NLST closing price today was $5.21. Now it’s $4.82 after-hours. Ouch!

    My guess is that NLST will trade in a range between $4.50 and $6.50 for 3 to 6 months, and won’t rise steadily or significantly until the lawsuits are settled, the OEMs test and approve HyperCloud, the latter gets certified, and JEDEC decides the standardization question.

    Over the next 1 to 2 years, I think NLST and HyperCloud will prosper nicely. But tonight’s share-dilution (on top of the many other obstacles to NLST that I just mentioned) will probably suppress the share price for several months.

    Other opinions?

  92. http://www.netlist.com/investors/SEC_filings.htm

    The above link will take you to netlist.com, where you can download NLST’s S-3 filing dated today, 1-11-2010, in the format that you prefer. It confirms that NLST has filed with the SEC its plan to sell $30 million in mixed securities in a “shelf offering,” which may be sold over an unspecified duration.

  93. For an explanation of the “dilution”, please read the following on the NLST yahoo board (poison-pill provisions possibility), since with 10M shares, a possible hostile takeover by GOOG would not be out of the question:

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=11805&mid=11887&tof=2&frt=2#11887
    Re: OFFERING, SELL SELL SELL .. part1

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=11805&mid=11888&tof=1&frt=2#11888
    Re: OFFERING, SELL SELL SELL .. part1

  94. netlist,

    Your “poison pill” hypothesis seems very plausible to me too. Netlist is especially wary of Google right now, because of Netlist’s lawsuits with Google, but Netlist is also probably wary of being vulnerable to a “premature” buy-out or hostile take-over from HP, Dell, IBM, Cisco, or any number of bigger companies. And almost everyone is bigger and richer than Netlist.

    Since Netlist’s new product, HyperCloud, has real potential to be a blockbuster that could take Netlist’s share price to fantastic heights (and since Netlist has worked hard and long to remake itself, and now hopes to finally achieve the potential that they never enjoyed since their IPO in 2006), I think that Hong and Netlist’s other big inside owners must really want Netlist to have a chance to succeed and grow on its own, and not have its independent life “taken away” prematurely in what is really its infancy. A premature buy-out or take-over would be emotionally painful to Netlist management and employees, I think.

    In addition to the emotional aspect, there is the financial aspect: Hong et al obviously would prefer to sell their shares in a few years at $500, not now at $5!!!

    So now seems like a perfect time to obtain the legal right to sell up to $30 million worth of securities, in the manner and timing of Netlist’s choosing. Why? First and most important, for self-defense against Google (and others), as you rightly point out. And second, because Netlist’s share price has been range-bound (and likely will continue to be) until the obstacles I mentioned a few comments above get resolved. This S-3 filing would cause a price drop at almost any time, so it’s best to get it over with now, when the price is likely to be trading sideways for another 3 to 6 months anyway. Then, when Netlist has removed the obstacles that are currently in its way, and the orders come in and the payments come in, Netlist will likely be clear for takeoff.

    Lastly, I just want to highlight that Netlist’s S-3 filing is not an offer to sell shares, and it’s not an obligation to sell shares — it will be (once approved by the SEC) just an option to act that sits on the “shelf,” waiting for Netlist to use if and when Netlist needs or wants to use it.

  95. Thanks again, netlist and netlistfan,

    There does seem to be the potential for Netlist to grow into a remarkable company, but they also appear to be an attractive target at this point. If the filing can help protect them, then it sounds like a good move to make. I wasn’t sure what kind of reactions this post might get when I first posted it, but I didn’t expect the mystery to start unraveling the way that it has. Thanks again, for keeping this post up to date with the latest news.

  96. Today’s call was very depressing. I have accumulated quite a bit and was hoping to hear about OEM qualifications in today’s Needham talk. All I heard was 6 months to revenue, and no concrete OEM announcement nor any talk of lawsuit settlement.

    Netlist and others – can you help me understand why it takes so long for qualification? Also, are the lawsuits preventing quick adoption of HyperCloud?

    Seems to have become hype-o cloud from HyperCloud!

  97. joeq,

    I share your discouraged feeling and your financial pain. I agree with you that the NLST Investor Presentation was very disappointing. So much so, that I sold all my shares of NLST, at a painful-but-bearable loss, so that I could “let go and move on.” I hope to make the loss back elsewhere.

    Like many others, I jumped too soon and too much into NLST because of all the great descriptions of its new product, HyperCloud, in November. But after watching a lot of my money drop and drift for two long months (while other stocks are rising), the thought of having my money falling or flat for another 6 months (or more) prompted me to sell, and switch to other stocks.

    I still think NLST and HyperCloud have great potential IF everything works out well. But will it? And when?

    Here is a “top 10” list of my concerns regarding NLST:

    1) Is HyperCloud truly the huge technological advance that the “hype” has claimed it is?

    2) Does HyperCloud work exactly as promised, or will tests by OEMs require adjustments and delays?

    3) How long will it take for OEMs to finish testing HyperCloud, and will they approve it? (Netlist Investor Relations doesn’t know.)

    4) How long will it take for HyperCloud to receive full certification? (IR doesn’t know.)

    5) How long will it take for OEMs to place big orders and for NLST to start mass production? (IR doesn’t know.)

    6) How long will it take for NLST to start receiving big sales and big payments? (NLST “thinks” 6 months, but based on all of the unknowns, I think 6 months is just a guess, and it could take longer.)

    7) How long will it take for the lawsuits between NLST and GOOG, and NLST and Inphi, to be resolved (and will NLST win, or benefit from, these lawsuits)? Nobody knows.

    8) To BILL: how will the above lawsuits be affected by a) GOOG’s buying of MetaRAM’s patents, and b) MetaRAM’s and NLST’s cross-licensing of patents? (IR doesn’t know.)

    9) What are NLST’s plans for their recent $30 million S-3 “shelf” filing, and do these plans include protection against a possible hostile takeover (netlist’s “poison pill” idea) or premature takeover (my idea)? (IR doesn’t know.)

    10) Will NLST be able to become a successful company on its own (after struggling for 3 years since its IPO), or will NLST get bought out and merge into a much larger corporation? No one knows.

    I want to emphasize that these are my concerns and understandings regarding NLST. Anyone is free to call Netlist Investor Relations’ Ms. Jill Bertotti at (949)474-4300, or to email her at jill@allencaron.com, and ask her your own questions.

    There are no doubt additional unknowns and concerns regarding NLST — but these 10 alone seem likely to make an investment in NLST take 6 months or longer to significantly pay off.

    For example, if the economic recovery in the U.S. and the world is slower than expected, or suffers a serious setback, tech spending on products like HyperCloud will likely be lower and slower.

    I still wish NLST (and NLST investors) all the best, and I will watch to see if it eventually takes off (in price and performance), but I won’t buy it again unless and until it proves itself to be growing quickly and steadily.

    BILL, thanks for starting this very helpful thread, and for your great discoveries and comments.

    netlist, thanks for your especially useful information, prompt replies and thorough comments, many links, and thinking through of implications.

    And thanks to everyone who commented and contributed to this thread.

    joeq, I hope my reply helps. Maybe others can also answer your questions. I wish you the best.

    Best regards everyone! Maybe I’ll see you later. Hope you have a healthy, happy, and prosperous new year!

  98. oops! On number 9 of the above list, I meant to type “premature buy-out” not “premature takeover”.

    Thanks for the smile, Bill. :>)

  99. Just 5 more (I promise) :>)

    11) When will JEDEC decide whether or not to adopt NLST’s IP and HyperCloud as the industry standard — and how will NLST be affected either way?

    12) How well will NLST compete against much bigger and richer competitors (like CSCO)?

    13) Are tiny NLST’s production capacities too small to keep up with a potentially huge demand by giant companies like HP, DELL, and IBM?

    14) Does HyperCloud truly have a competitive edge over other products, and if so, how long and how much will it be profitable for NLST?

    15) How long will it take before technological innovations by other companies advance ahead of NLST’s HyperCloud?

    OK, I’m done. Good luck all!

  100. Thanks, joeq and netlistfan.

    Great questions, and a lot to think about. The issues involving Netlist, MetaRAM, and Google here are the type that affect many tech companies. Can the small startup survive to become a large one? How can innovation in technology, market pressures, standards bodies, and the need for that innovative technology potentially shape our futures?

    To BILL: how will the above lawsuits be affected by a)GOOG’s buying of MetaRAMS’ patents, and b)MetaRAM’s and NLST’s cross-licensing of patents? (IR doesn’t know.)

    I’m not sure in this particular instance, between these particular parties. I’m not sure that I’ve seen Google purchase patents from another company before in what might be characterized as a defensive maneuver, if that is what in fact took place. That’s why I wrote about it in the first place. The cross-licensing of patent processes between Netlist and MetaRAM was a surprise as well.

    I don’t have any stocks from any of the companies involved, but I think there are some pretty large implications behind what happens between the companies involved for large scale data centers, and search providers like Google. I’ll be following along, and very thankful for all of the sharing of information within the comments on this thread.

    I hope you all have a wonderful new year as well. Thanks, again. :)

  101. Note GOOG does not have the patent that MetaRAM was suing NLST with.

    So the MOST overlapping patent that MetaRAM could think of is now licensed by NLST.

    Even if GOOG were to license NLST patents now, it would not undo years of infringement (and treble damages if wilful).

  102. quote:
    Today’s call was very depressing. I have accumulated quite a bit and was hoping to hear about OEM qualifications in today’s Needham talk. All I heard was 6 months to revenue and no concrete OEM announcement nor any talk of lawsuit settlement.

    Netlist and others – Can you help me understand why it takes so long for qualification? Also are the lawsuits preventing quick adoption of hypercloud?

    Seems to have become hype-o cloud from hypercloud!

    NLST yahoo board:
    http://messages.yahoo.com/?action=q&board=nlst

    Many people have said on that board that OEM qualification does take time – maybe others can shed light on whether 3-6 months is normal.

    Here is an overview of Needham presentation:
    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=12087&mid=12087&tof=1&frt=2#12087

  103. netlist provided lots of good info in the last link to the NLST Yahoo Board that he posted just above. Scrolling far down on that Yahoo Board thread, the following opinion by “herbieray20…” that NLST might hit $2 or $3 and take a full year to take off seemed worth posting here.

    netlist rightly replied that NLST’s share price might jump earlier than that, and by a lot, if, for example, the lawsuits settle sooner and favorably.

    Lots of scenarios are possible, but most current opinions state that it will most likely take 6 to 12 months for NLST to take off in a sustained way.

    Here’s the quote to consider:

    “This [Netlist's HyperCloud] will take at least a year to bear fruit, with lots of ups and downs in the stock price.

    As a retired engineer in the server area, I have lots of experience with product development cycles, evaluation of chips sets, etc. I would be very surprised if this product has tangible effects on revenues/profits for at least 4 quarters, and then who knows what the competition will have done..?

    My prediction for the next year is consolidation between $2 and $3 at best.

    Just my opinion.”

    [by herbieray20... on NLST Yahoo Board, 1-16-10]

  104. NLST has answered Inphi’s complaint in Inphi vs. NLST.

    Among the usual boilerplate, one comment sticks out. NLST claims that Inphi cannot claim “injunctive relief”, because Inphi is subject to a “compulsory reasonable and non-discriminatory (RAND) license requirement pursuant to Inphi’s membership in JEDEC and activities therein in connection with these patents”.

  105. Netlist,
    Based on your statement,

    Among the usual boilerplate, one comment sticks out. NLST claims that Inphi cannot claim “injunctive relief”, because Inphi is subject to a “compulsory reasonable and non-discriminatory (RAND) license requirement pursuant to Inphi’s membership in JEDEC and activities therein in connection with these patents”.

    Does this mean that Netlist is infringing Inphi patents in its product and trying to use JEDEC as a shield? Clever move by Netlist. I wonder if they infringe other JEDEC-encumbered patents.

  106. NLST sued Inphi (NLST vs. Inphi) and later Inphi retaliated with Inphi vs. NLST.

    Inphi is lacking any serious IP in the area. Their retaliatory suit has variously been reported as “frivolous”.

    I posted that to support the general impression that Inphi’s suit is a kneejerk suit crafted without thought.

  107. Netlist today announced that the United States Patent and Trademark Office issued to Netlist Patent No. 7,636,274 for its invention related to memory load isolation and memory rank multiplication, and Patent No. 7,619,912 for its invention related to memory rank multiplication.

  108. Hi McDee,

    Thanks for citing those. From what I understand, they were announced because they have something to do with Netlist’s HyperCloud memory modules.

    I didn’t do a rundown of Netlist patents here, but I looked through a number of them. These weren’t patents that were just published, but they are fairly recent. The newer of the two was granted in December. The press release, from January 19th, tells us:

    IRVINE, Calif., Jan. 19 /PRNewswire-FirstCall/ — Netlist, Inc. (Nasdaq: NLST) today announced that the United States Patent and Trademark Office issued to Netlist Patent No. 7,636,274 for its invention related to memory load isolation and memory rank multiplication, and Patent No. 7,619,912 for its invention related to memory rank multiplication. These fundamental technologies are integral to Netlist’s Hypercloud product line which maximizes server utilization by removing memory capacity and bandwidth bottlenecks, thereby improving datacenter performance.

    “The issuance of the ‘912 and ‘274 patents further reinforces the innovations Netlist is delivering to the market with highly differentiated products,” said C.K. Hong, President and CEO of Netlist. “These are foundational patents and with our robust portfolio of intellectual property, we can uniquely address the system challenges our customers face in the datacenter.”

  109. In GOOG vs. NLST (now consolidated with NLST vs. GOOG at the request of both GOOG and NLST), GOOG sacks its whole legal team from Fish & Richardson.

    From filing dated Jan 21, 2010:

    Please take notice that plaintiff GOOGLE INC., hereby substitutes Timothy T. Scott, Geoffrey M. Ezgar, and Leo Spooner III of the law firm of King & Spalding LLP as attorneys of record in the place and stead of David J. Miclean, Howard G. Pollack, Jason W. Wolff, Juanita R. Brooks, Robert J. Kent, Jr. and Shelley K. Mack of the law firm of Fish & Richardson, located at 12390 El Camino Real, San Diego, CA 92130 and 500 Arguello Street, Suite 500, Redwood City, CA 94063.

  110. What is interesting is that the new law firm, King & Spalding, is NOT KNOWN for patent or intellectual property litigation.

    That is, they are not known for being “trial lawyers” or “intellectual property” lawyers, but are considered #2 in the country for arbitration (yes, ARBITRATION)!

    If you look at their practices:

    http://www.kslaw.com/portal/server.pt?space=KSPublicRedirect&control=KSPublicRedirect&CommunityId=227&ui_pa_sort=group&ui_pa_display=

    They surely DO have practice (like all large firms) in:
    – Licensing
    – Patents
    – Trade Secrets & Non-Compete Litigation
    – Mergers & Acquisitions

    HOWEVER, they are not a small tight outfit that just deals with “intellectual property” or patent defence.

    If you have all the money in the world (GOOG) to protect yourself in an IP-related lawsuit, you would get the best lawyers for that (if you were intending to contest on IP grounds).

    However if you were thinking of getting the best deal – you would get the best company in arbitration.

    In terms of rankings they are ranked VERY HIGH in arbitration, but are not even MENTIONED in rankings for patent or intellectual property litigation:

    http://www.kslaw.com/portal/server.pt?space=KSPublicRedirect&control=KSPublicRedirect&PressReleaseId=3375
    King & Spalding Lawyers Earn 26 Rankings As Leaders In Their Fields and 18 Practice Areas Recognized In Chambers Global 2009
    04 Mar 2009

    Historically not exactly famous for patent litigation either:

    http://en.wikipedia.org/wiki/King_&_Spalding

    Notable Mandates

    * Counseled Sprint Corp. in its sale of Sprint Publishing & Advertising, the directory publishing business to RH Donnelly Corp. for $2.23 billion. The transaction was announced in 2002 and closed in 2003.
    * Represented JDN Realty Corp. in its $1.02 billion sale to Developers Diversified Realty Corp. for a combination of cash and stock. The deal closed in 2003.
    * Advised Credit Suisse First Boston as financial adviser to Graphic Packaging in its $3 billion merger with fellow forestry and paper company Riverwood Holding in 2003.
    * Represented Caremark Rx in its $6 billion merger with AdvancePCS in 2004.
    * Counseled Lockheed Martin in its $2.4 billion acquisition of Titan Corp., in a mixed cash and stock offer which closed in 2004.
    * Advised SunTrust Bank in its $6.98 billion purchase of National Commerce Financial Corporation in 2004.
    * Legal counsel to Novelis, a Canadian-based aluminum company in its purchase by Hindalco Industries Ltd., an Indian steel company for total consideration of $6 billion. The transaction closed in 2007.

    Link for #2 ranking for arbitration:

    http://www.kslaw.com/portal/server.pt?space=KSPublicRedirect&control=KSPublicRedirect&PressReleaseId=3491
    King & Spalding Earns No. 2 Spot in 2009 Arbitration Scorecard
    26 Jun 2009

    NEW YORK, June 26, 2009—King & Spalding, a leading international law firm, earned the No. 2 spot in Focus Europe’s 2009 Arbitration Scorecard, a worldwide ranking of law firms by number and size of arbitrations. The rankings were published in the summer 2009 issue of Focus Europe, an annual supplement to The American Lawyer.

    Focus Europe noted that King & Spalding is among “the first tier of arbitration law firms.” The firm appeared as arbitration counsel in a total of 25 arbitrations included in the 2009 Arbitration Scorecard.

    King & Spalding was also included in Focus Europe’s list of Twelve Big Awards for its representation of three of the listed awards: Azurix Corp. v. Argentine Republic ($165 million), Sempra Energy International Co. v. Argentine Republic ($128 million) and Enron Creditors Recovery Corp. and Ponderosa Assets, LP v. Argentine Republic ($106 million).

    The 2009 Arbitration Scorecard covers international arbitrations (not limited to Europe) that were active in the years 2007 and 2008. It is based on nearly 250 cases—all either commercial disputes with stakes of at least $500 million or treaty disputes with stakes of at least $100 million.

    Among the survey’s list of disputes, King & Spalding served as claimant’s counsel in one investment treaty arbitration and three contract arbitrations in which at least $1 billion was in controversy.

    King & Spalding is ranked among the leading international arbitration practices in the world. Chambers USA 2009 says, “This powerhouse continues to impress with its international arbitration practice, attracting praise for its depth of knowledge and client service,” an accolade that echoes from the publication’s 2008 edition, which described the firm as “currently one of the arbitration arena’s biggest success stories.” King & Spalding was nominated for a Chambers USA Award for Excellence 2009 in international arbitration and was a finalist in 2008. It also features among the world’s leading international arbitration practices in Chambers Global 2009. And the 2009 edition of The Legal 500: US describes King & Spalding’s international arbitration team as “simply terrific.”

    About King & Spalding
    King & Spalding is an international law firm with more than 880 lawyers in Abu Dhabi, Atlanta, Austin, Charlotte, Dubai, Frankfurt, Houston, London, New York, Riyadh (affiliated office), San Francisco, Silicon Valley and Washington, D.C. The firm represents half of the Fortune 100 and in Corporate Counsel surveys consistently has been among the top firms representing Fortune 250 companies. For additional information, visit http://www.kslaw.com/.

  111. From an interview with GOOG’s new lead lawyer (Timothy Scott):

    http://apps.kslaw.com/Library/publication/Zimmer%20Scott%20Met%20Corp%20Counsel%20Jan%202010.%20pdf.pdf
    Top Litigators Manage Firm’s California Offices
    Page 32, The Metropolitan Corporate Counsel, January 2010
    The Editor interviews Timothy T. Scott and Donald F. “Fritz” Zimmer, Jr., King & Spalding LLP.


    Editor: To what extent has the cost of e-discovery contributed to the increase in litigation expense?

    Scott: You can’t even litigate a simple thing without the discovery cost dwarfing everything else in the case. If a complaint in a securities class action case survives a motion to dismiss, the cost of collecting and reviewing all the electronically stored data creates an impetus to settle the case before even getting to the merits in order to avoid the cost of e-discovery.

    Zimmer: The invention of email has done more to benefit plaintiffs’ counsel than any other development of the last 20 years. I have colleagues on the plaintiffs’ side of the bar who tell me they thank their lucky stars that email was invented.

    GOOG has already suffered from discovery on the hardware side – having to reveal its server to NLST (and thus proving the use of “Mode C” in GOOG servers).

    It will now have to contend with NLST riffling through GOOG e-mails as well – as the trail is examined of who said what at GOOG and when they knew it.

    The trial will examine the role of GOOG employees (mentioned in earlier court dockets and posted some days back – see above):

    Rick Roy – “involved in the development of the accused 4-rank FBDIMMs and who participated in meetings with Netlist concerning it’s patented technology”

    Andrew Dorsey – same as above

    Rob Sprinkle – same as above

    And god knows what else THAT “discovery” of GOOG internal e-mails will reveal.

    The situation is strongly in favor of GOOG settling the case.

    – for reasons mentioned above, i.e. legal issues and “discovery” problems for GOOG (a loss will also not help their “do no evil” image – and image is essential for GOOG, i.e. consumer trust, since that is part of the GOOG business model).

    – because alternatives to NLST are at a standstill.

    Alternatives to NLST – there are none so far.

    MetaRAM is out of business (NLST is now licensee of the patent MetaRAM had hoped to use against it).

    Inphi, which owns no IP in this area and was just hoping to sell a buffer chip, is embroiled in a legal dispute with NLST.

    Meanwhile memory module makers like Micron are waiting for JEDEC to arrive at a standard so they can start moving forward. Inphi is also awaiting that, so memory module makers will use its buffer chip (now that MetaRAM – which was earlier partnered with many memory module makers – is gone).

    But while the NLST/GOOG dispute (being the most prominent) is not resolved, and the licensing status of the IP infringed by the proposed JEDEC standard (like the JEDEC FBDIMM “Mode C” proposed standard) is not clear, JEDEC cannot move forward with standardizing – since that will benefit NLST as ITS IP becomes the standard that many people then start implementing, meaning more infringers and more people for NLST to collect damages from.

    In any case, JEDEC procedure is to see that a standard does not infringe proprietary technology – and if it does, to negotiate licenses itself (or through its members) to allow the standard to move forward. After all, the point of “the standard” is to encourage standardization – which will lead to lower overall costs for its members. JEDEC cannot blindly adopt something as a standard while IP and licensing issues remain unresolved.

    For this reason – we will see a DROUGHT of memory in this space. NLST, being the only unencumbered player – both as creator of the memory and as its manufacturer – will be in an enviable position, as there is no other player who can deliver what NLST can deliver.

    Plus it is not as if NLST HyperCloud is a totally new form factor – it is plug and play and requires no modification to the BIOS. This means it is a “no brainer” for an OEM server manufacturer to incorporate NLST HyperCloud, since nothing else is available and there is no “cost” to doing this (i.e. “how can we lose”).

    In addition, all this is timed to coincide with the much-reported server upgrade cycle (there is a lot of pent-up demand, as there were fewer upgrades/purchases in the last 2 years due to economic uncertainty, and the upgrade cycle is now beginning – memory price improvement etc.).

    And you have OEMs in a crunch – they cannot avoid using NLST.

    Meanwhile memory module makers will be getting impatient, as they will miss the upgrade cycle (at least in this area of data center upgrade/cloud computing expansion). They will be under pressure to negotiate licensing deals with NLST.

    Note that while many memory module makers have done deals with MetaRAM in the past, they have NOT been prosecuted by NLST (partly to limit its legal expenses perhaps – and partly because these people are all potential customers).

    GOOG will perhaps be under the most pressure – with ever-expanding hardware needs (GOOG being a big user of memory-loaded systems, for which the NLST HyperCloud solution is most appropriate), GOOG will be in a crunch as well if it cannot upgrade its systems for lack of non-infringing solutions.

    In addition, note that GOOG – given the possibility of willful infringement (since GOOG had discussed the technology with NLST, then went ahead and violated NLST IP) – could face treble damages in court (if the case goes through).

    So the pressure is on GOOG to settle. But because memory/server expansion is such a big part of its business, GOOG loses every month that it delays – every month that standard/legal memory modules are NOT available to sate the growth needs of GOOG server expansion.

    So the clock is ticking for most of these players, and that makes the GOOG vs. NLST/NLST vs. GOOG cases unlike a traditional IP infringement suit – since there are time issues as well, which are NOT in favour of GOOG.

  112. Hi Netlist,

    Thanks for the updates and observations on the legal representation in the Google vs. Netlist litigation. There do seem to be some factors involved that point more towards a settlement than prolonged litigation. I guess we wait and see.

  113. Netlist,
    Brilliant analysis. Maybe there will be some money after all. Any thoughts on Google settlement in terms of
    dollars ? How much can we expect ? Do you think Inphy will also settle and any guess on how many dollars
    can we expect out of them ?

  114. I do not know what the difference is between GOOG using 4-rank memory (for which “Mode C” is a smoking gun) and the other memory module makers who are making 4-rank memory – i.e. whether they are violating NLST IP as well.

    It is possible that they are – except that NLST has chosen not to fight them right now, and has gone against GOOG first (limited legal resources, and the memory module makers could be allies later).

  115. quote:
    Maybe there will be some money after all. Any thoughts on Google settlement in terms of dollars ? How much can we expect ? Do you think Inphy will also settle and any guess on how many dollars

    Trying to pin down the knowns – keeping in mind the constraints, i.e. what we know of GOOG psychology, their business model, and how they hope to behave to retain customer trust etc.

    My guess is as part of settlement, GOOG will want no attribution of guilt for starters (to avoid pollution of “do no evil” motto). To achieve that they will be willing to concede in other areas i.e. monetarily.

    GOOG can pay and walk away. But the situation is not that simple – there is a reason it was infringing NLST IP: this is exactly what GOOG needs for its servers.

    NLST HyperCloud is designed precisely for GOOG type situations (i.e. increases speed for memory-loaded servers – apart from the cost and power advantages).

    So GOOG has to make sure that it can negotiate a path for itself as well (so GOOG servers are not shut down). So maybe the carrot will be a contract for use, or licensing terms to protect existing GOOG usage.

    Because of the constraints above, there will have to be a transition from acrimonious to congenial. GOOG knows it can’t just walk away from NLST even after throwing money at it – it will have to buy memory or license from NLST in the future even if they were not personally in litigation.

    Therefore I suspect a change in attitude at GOOG – the change in law firm already changes the faces that NLST lawyers meet, thereby allowing discussion in a different direction (as I posted above, the new GOOG law firm is #2 in the country for ARBITRATION, and not particularly famous for IP litigation).

    The effect of that will be multiplicative for NLST – a concession from GOOG will be validating for NLST. And GOOG may understand the value it creates just by acknowledging the validity of NLST IP – it is a signal to other players to fall in line (if GOOG the gorilla is acknowledging NLST IP validity).

    I would not expect GOOG to take a share in the company – since insiders may be careful at this stage. Plus they may need to remain neutral in order to be a trusted supplier to a whole range of consumers (which include many cloud computing competitors of GOOG).

    Regarding Inphi – maybe they will settle for a small payment. They probably haven’t sold that many buffer chips (which would only have been used if JEDEC had finalized the standard). So maybe there won’t be any great damages.

    Don’t see any real synergies between Inphi/NLST, so maybe a simple cash payment or a slap on the wrist.

    Inphi holds no IP in this area, yet was trying to step into MetaRAM’s shoes after that company went bankrupt. MetaRAM was the darling of Intel and other memory module makers – who were using its buffer chips. Now Inphi was hoping to do the same (except without any IP) – mainly banking on JEDEC/module-makers to deal with the IP issues. However NLST didn’t go after those, but went directly after Inphi for IP infringement.

    Argument for early GOOG/NLST settlement:

    GOOG also will understand certain inherent advantages with an early settlement.

    However, superficially one would not expect the settlement to occur much before the 3-6 month OEM qualification period has run (Needham conference audio), since GOOG knows NLST will not be manufacturing in volume until then – so no hurry.

    On the other hand, there may be a whole process for internal qualification at GOOG – which does a lot of custom solutions in-house – and they may want to “join the program” earlier so they can also give feedback to NLST (along with the other OEMs like HP and DELL).

    This type of thinking would suggest a much earlier settlement, where early resolution is beneficial to GOOG, rather than delaying settlement (which achieves nothing – they still have to pay, are in a worse negotiating position, and are behind on the OEM qualification roadmap).

    In any case, GOOG founders may be of the “keep it simple” opinion – i.e. if it IS decided that they have to settle eventually, then settle EARLY (and remove the distraction), and instead use the time to forge a new relationship with NLST and get in early with qualification of the new memory.

    If this reading is correct, we may see a settlement far earlier than the 3-6 month OEM qualification period.

    Some comments on eventual JEDEC/NLST negotiations:

    JEDEC is waiting for legal clarification, since its proposed standard runs afoul of NLST IP. Since the JEDEC standard is meant to make things easier for manufacturers, they would require favorable licensing terms from NLST before they could finalize the standard (and advocate it to manufacturers).

    Since GOOG is the bigger player (and the decision is influential for others), I doubt NLST would bother dealing with a JEDEC deal before the GOOG deal.

    After GOOG/NLST resolution, we may see JEDEC negotiating for reasonable terms of licensing with NLST.

    Since NLST memory is plug and play and requires no BIOS updates, there is LESS need by OEMs for JEDEC standardization here. In fact GOOG and others will not need JEDEC approval to start using NLST memory. This would have been different if it required changes to the BIOS or motherboard – in that case there would be a need for some standardization of how those changes should be made.

    But as a consumer of memory for memory-loaded servers (where NLST HyperCloud works best), GOOG would WANT NLST IP to be licensed by JEDEC/module-makers so there are many manufacturers and prices go down on this technology. Of course, this would be the (JEDEC/RMBS-like) “royalty-based” model that CEO Hong mentioned in the Needham conference audio:

    quote:
    we have strong IP which create competitive barriers as well as provide future avenues for a royalty based business model

    Since all these matters ARE interrelated – for instance a GOOG settlement with NLST suddenly puts NLST in a strong spot – knowing this, GOOG may try to combine the GOOG settlement with the JEDEC-licensing negotiations. While radical, this would be the sort of thing GOOG could do. It gives it some street cred, plus it is beneficial in the long run to GOOG, which is an avid consumer of memory for servers.

    Allied NLST IP like “embedded passives”:

    A GOOG/NLST settlement raises other questions – what will become of the 4-rank modules that memory module makers have been making for some time? Is that all a violation of NLST IP as well? Were a lot of those 4-rank modules sold before? Settlement would involve forgiving, or getting compensation for, all the other NLST IP that has been used by others.

    However, so far NLST has been careful to avoid litigating too many cases – they seem to have gone after MetaRAM, GOOG and Inphi, i.e. the core players making the memory, or influential in what happens.

    If JEDEC were to license NLST IP to JEDEC/memory-module-makers, they would probably need to license more than just the core IP, since to do it as well as NLST they may require allied IP like “embedded passives” (to free up space on memory modules).

    Summary:

    So in summary, given previous comments about the time-sensitive nature of the situation for GOOG, which has ever-expanding server/memory-use growth, we may see a settlement far earlier than the “one day before jury trial” scenario.

    The trend by infringers to drag out cases to settle a day before jury trial (to deplete accuser’s resources) is thus not applicable here.

    And the time-sensitive nature includes not just wanting to use the memory, but also to join early so it can participate in qualification and feedback for NLST HyperCloud i.e. be part of the process early on if they ARE going to be using that memory anyway.

    And then possibly also to mobilize JEDEC/NLST licensing so future multiple sources of such memory are available for GOOG.

    Regarding Inphi, I don’t think there will be any of the “complicated relationship” issues (as between NLST/GOOG) since NLST probably doesn’t expect to be manufacturing anything through Inphi.

    Inphi is not just a buffer chip manufacturer – so the case won’t harm them too much.

    However the actual damages retrievable from Inphi may not be huge, since they haven’t really sold this buffer chip much (i.e. still only at the announcement level). Although they WERE prepping to replace MetaRAM as the buffer chip of choice for memory module makers (like Micron etc.).

  116. quote:
    My guess is as part of settlement, GOOG will want no attribution of guilt for starters (to avoid pollution of “do no evil” motto).

    Which I think is why Google started the litigation against Netlist with their declaratory relief action.

    quote:
    NLST HyperCloud is designed precisely for GOOG type situations (i.e. increases speed for memory-loaded servers – apart from the cost and power advantages).

    They do seem like ideal clients for Netlist, with Hypercloud memory. Even though they are adversaries in court at this point in time, there is the potential for them to do business together in the future.

    Thanks for the detailed analysis, netlist.

    I do still find myself puzzled by Google’s purchase of MetaRAM’s patents, and what they might do with them in the future.

  117. quote:
    I do still find myself puzzled by Google’s purchase of MetaRAM’s patents, and what they might do with them in the future.

    As an NLST shareholder, I would be happy to see GOOG transfer that IP to NLST – although much of it may not be valuable (like the IP on stacked memory, which NLST has criticized for asymmetrical data lines etc.).

    A related question is what GOOG intends to do with its internal hardware division – or at least the sub-section that was involved with development of the “internal” infringing memory modules that GOOG is using (it’s not known who actually manufactured those for GOOG).

    If GOOG intends to keep that division, they may need IP like MetaRAM’s for the future (at least to mount “retaliatory lawsuits”).

  118. Hi netlist,

    I suspect that Google wants to maintain its own independent ability to develop and manufacture hardware for its own internal uses. It’s possible that’s the reason for the acquisition of the patents, but it was still a surprise to see. I’m not sure that they would transfer the MetaRAM IP over to Netlist, but I’ll try to keep my eyes open in case it happens.

  119. Yes, makes sense. Although GOOG’s efforts in hardware are hard to gauge. We don’t even know how GOOG made those memory modules, in what numbers, or whether MetaRAM was involved with that (it can’t have been, if MetaRAM says it only made $37,000 worth and destroyed them at that).

    http://www.baselinemag.com/c/a/Infrastructure/How-Google-Works-1/
    How Google Works
    By David F. Carr
    2006-07-06

    quote:
    Google runs on hundreds of thousands of servers—by one estimate, in excess of 450,000—racked up in thousands of clusters in dozens of data centers around the world.

    And this is from 2006. But as the other paper you posted above:
    DRAM Errors in the Wild: A Large-Scale Field Study (pdf)
    http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf

    suggests, they may have a variety of hardware.

    http://en.wikipedia.org/wiki/Google_platform
    Current hardware
    Servers are commodity-class x86 PCs running customized versions of Linux. The goal is to purchase CPU generations that offer the best performance per dollar, not absolute performance.[7] Estimates of the power required for over 450,000 servers range upwards of 20 megawatts, which cost on the order of US$2 million per month in electricity charges. The combined processing power of these servers might reach from 20 to 100 petaflops.

    Here’s an article on a GOOG server:

    http://news.cnet.com/8301-1001_3-10209580-92.html
    April 1, 2009 2:26 PM PDT
    Google uncloaks once-secret server
    by Stephen Shankland

    This article suggests GOOG builds a battery into its servers – and may thus avoid separate UPS costs (i.e. a server can tolerate an interruption before a generator is started).

    The idea of using a battery is something many may have thought of – except GOOG has done it (because there is a critical mass of such people there – as soon as someone proposed it, there would be many who would immediately warm up to the idea – as opposed to a more conventional company).

    As the article states – the loss of efficiency in conversion is important, as it directly impacts the heat that then has to be managed with air conditioning etc.

    They have also simplified things by having the motherboard (or possibly the power supply, as the comments suggest) do the 12V to 5V conversion (usually something motherboards require from power supplies) – and this simplifies the use of a single voltage, i.e. a 12V battery.
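    The scale of those conversion losses can be sketched with a little arithmetic. This is a back-of-the-envelope illustration only: the ~20 MW fleet figure is the Wikipedia estimate quoted earlier, while the 85% and 92% converter efficiencies are assumed numbers for the sake of the comparison, not anything reported about GOOG hardware.

    ```python
    # Rough sketch: waste heat from power conversion at fleet scale.
    # FLEET_WATTS is the ~20 MW estimate quoted earlier; both
    # efficiency figures are assumptions for illustration only.

    def conversion_loss_watts(input_watts: float, efficiency: float) -> float:
        """Power dissipated as heat by a converter of the given efficiency."""
        return input_watts * (1.0 - efficiency)

    FLEET_WATTS = 20_000_000  # ~20 MW, per the estimate quoted above

    loss_multi_rail = conversion_loss_watts(FLEET_WATTS, 0.85)   # assumed multi-rail PSU
    loss_single_rail = conversion_loss_watts(FLEET_WATTS, 0.92)  # assumed 12V-only design

    print(f"Heat at 85% efficiency: {loss_multi_rail / 1e6:.1f} MW")
    print(f"Heat at 92% efficiency: {loss_single_rail / 1e6:.1f} MW")
    print(f"Heat avoided: {(loss_multi_rail - loss_single_rail) / 1e6:.1f} MW")
    ```

    Even a few points of converter efficiency translate into megawatts of heat that the air conditioning no longer has to remove – which is presumably why the article dwells on conversion losses.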

    quote:
    The Google server was 3.5 inches thick–2U, or 2 rack units, in data center parlance. It had two processors, two hard drives, and eight memory slots mounted on a motherboard built by Gigabyte. Google uses x86 processors from both AMD and Intel, Jai said, and Google uses the battery design on its network equipment, too.

    The comments suggest the motherboard is:
    http://www.gigabyte.com.tw/Products/Networking/Products_Spec.aspx?ProductID=1075&ProductName=GA-9IVDT

    Which on the face of it would only support up to 12GB of DDR2 400MHz memory.

    One thing to note though – they DO have all the memory slots in use, though that would make sense from an economic standpoint, i.e. get the least dense (cheapest) memory module and populate all the slots.
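    A toy calculation of that slot-economics point. The module prices below are entirely hypothetical (denser modules typically carried a per-GB premium in the DDR2 era), invented just to make the arithmetic concrete:

    ```python
    # Hypothetical illustration: filling all slots with the least-dense
    # (cheapest) module vs. using fewer, denser modules. Prices invented.

    def fill_cost(target_gb: int, module_gb: int, module_price: float) -> float:
        """Cost of reaching target_gb with modules of module_gb capacity each."""
        modules_needed = -(-target_gb // module_gb)  # ceiling division
        return modules_needed * module_price

    # Target 8GB on an 8-slot board; assumed DDR2-era pricing.
    cost_all_slots_1gb = fill_cost(8, 1, 20.0)   # 8 x 1GB at a notional $20 each
    cost_half_slots_2gb = fill_cost(8, 2, 55.0)  # 4 x 2GB at a notional $55 each

    print(f"8 x 1GB (all slots used):  ${cost_all_slots_1gb:.2f}")
    print(f"4 x 2GB (half the slots): ${cost_half_slots_2gb:.2f}")
    ```

    Under these invented prices the fully populated, low-density configuration comes out cheaper – matching the economic reading of why all eight slots would be in use.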

    GOOG infrastructure is designed to tolerate faults and motherboard failure – it is possible it is also designed for server variation. If so, there is no real indication that new servers being installed are not using more memory than this server that was revealed.

    After all, the server GOOG showed in discovery for GOOG vs. NLST WAS infringing NLST IP (using “Mode C” and, by implication, 4-rank memory). It could be that, since GOOG had chosen not to deny using “Mode C”, they chose to reveal a server demonstrating it as well – to simplify the process of eventual settlement and arbitration.

    One reason GOOG could use the 4-rank memory despite not having heavily memory-loaded systems (if 8-12GB still runs at top speed) could be reduced power consumption and the ability to use cheaper memory chips. But would the manufacturing of such custom memory not be expensive as well (compared to a mass producer of such memory)?

    The article is dated April 1, 2009 – the author confirms that it was not an April Fool’s article.

  120. Hi netlist,

    Informative articles, especially the CNET one from April 1st.

    There is an Exaflop patent (Exaflop shares the same address as Google on its patent submissions), Data center uninterruptible power distribution architecture, which includes the use of a 12 Volt lead acid battery in the event of power failure.

    The patent looks like it might describe an earlier generation of Google’s use of a 12 volt battery for each server. A number of the many granted patents and patent applications assigned to Exaflop mention the use of a 12 volt battery.

    Google also has a few granted and pending patent filings on motherboard cooling systems, a modular data center, and other data center approaches (including a water-based data center).

    But I haven’t seen any published patent filings from them (other than the MetaRAM assigned ones) that focus upon memory.

  121. NLST officially announces its settlement with MetaRAM.

    I wonder why the delay – is it because they had to wait for the final approval by the court to appear?

    The other alternative – that NLST is savvy about holding back on news and posting it (like the previous 2 patents) at a time when the stock is being manipulated down by market makers etc. If so, that would be interesting – and the opposite of what some companies wind up doing, i.e. screwing shareholders. With insiders owning 50%-plus of NLST, that is perhaps one of the advantages – management is better aligned with shareholder interest.

    Stock price movements may not harm stocks in the long run, but they scare out many shareholders – leading to shareholder churn and (at least on stock bulletin boards) an absence of long-time holders. So in that sense at least it helps if a company’s stock price does not move up/down that much (or get manipulated down by market makers during a lull period).

    http://finance.yahoo.com/news/Netlist-Announces-Settlement-prnews-1484777084.html?x=0&.v=1
    Netlist Announces Settlement of Patent Infringement Lawsuits With MetaRAM
    Press Release Source: Netlist, Inc. On Thursday January 28, 2010, 1:25 pm EST

  122. Hi Netlist,

    It’s quite possible that they waited because they wanted to get legal filings out of the way, and a final settlement order from the two Courts involved. Making an announcement in a timely fashion after legal requirements were fulfilled would make it less likely to be perceived that they were announcing news in an effort to manipulate stock prices.

  123. GOOG’s attorneys King & Spalding add some IP litigation attorneys to the team:

    01/27/2010 90 MOTION for leave to appear in Pro Hac Vice Mark H. Francis ( Filing fee $ 210, receipt number 44611004730.) filed by Google Inc.. (Attachments: # 1 Proposed Order)(jlm, COURT STAFF) (Filed on 1/27/2010) (Entered: 01/28/2010)

    01/27/2010 91 MOTION for leave to appear in Pro Hac Vice for Daniel Miller ( Filing fee $ 210, receipt number 44611004730.) filed by Google Inc.. (Attachments: # 1 Proposed Order)(jlm, COURT STAFF) (Filed on 1/27/2010) (Entered: 01/28/2010)

    01/27/2010 92 MOTION for leave to appear in Pro Hac Vice for Scott T. Weingaertner ( Filing fee $ 210, receipt number 44611004730.) filed by Google Inc.. (Attachments: # 1 Proposed Order)(jlm, COURT STAFF) (Filed on 1/27/2010) (Entered: 01/28/2010)

    01/27/2010 93 MOTION for leave to appear in Pro Hac Vice for Susan Kim ( Filing fee $ 210, receipt number 44611004730.) filed by Google Inc.. (Attachments: # 1 Proposed Order)(jlm, COURT STAFF) (Filed on 1/27/2010) (Entered: 01/28/2010)

    01/27/2010 94 MOTION for leave to appear in Pro Hac Vice for Allison Altersohn ( Filing fee $ 210, receipt number 44611004730.) filed by Google Inc.. (Attachments: # 1 Proposed Order)(jlm, COURT STAFF) (Filed on 1/27/2010) (Entered: 01/28/2010)

    It seems Scott Weingaertner is the significant attorney with expertise in “employee trade secret misappropriation”:

    http://www.marketwire.com/press-release/King-Spaldings-Growth-Continues-in-New-York-760126.htm
    SOURCE: King & Spalding
    Aug 13, 2007 12:02 ET
    King & Spalding’s Growth Continues in New York
    Weingaertner focuses on intellectual property litigation and counseling with particular experience handling disputes regarding patent infringement, licenses and employee trade secret misappropriation, as well as patent interferences and ex parte procedures before the U.S. Patent and Trademark Office. He is well versed in the technology areas of semiconductors and other electronics, computer software, medical and other mechanical devices, and financial services. He earned S.B. and S.M. degrees from the Massachusetts Institute of Technology, and a J.D. from the University of Pennsylvania.

  124. GOOG attorneys King & Spalding probably needed some IP attorneys – that is understandable.

    The original reading still stands – if King & Spalding is unranked for IP litigation but #2 for arbitration, it seems likely that the #2 part is what brought them to the attention of GOOG.

    This is because Fish & Richardson (which they dumped) is already a respected law firm for IP litigation.

    In any case, GOOG may not have liked the direction in which things were going – or the prosecution pattern of its previous attorneys – and is possibly moving to a new tack (with new faces).

  125. This article gives a general sense of the situation – Fish & Richardson is ideal for IP litigation, while King & Spalding is for “general matters”.

    Fish & Richardson (GOOG’s previous attorneys) is consistently rated in the top 2 overall as well as for “patent prosecution”, while King & Spalding is #13 in the “overall” category and not even listed in the top 30 for “patent prosecution”:

    http://www.law.com/jsp/iplawandbusiness/PubArticleIPLB.jsp?id=1202437741766
    or
    http://www.slwip.com/about/whats_new/documents/2010Top10PatentProsecution.pdf

    The Guardians
    Which law firms do the country’s biggest corporations turn to when they need help obtaining, asserting, and defending their valuable intellectual property?
    By Erik Sherman
    IP Law & Business
    December 01, 2009

    The Big List

    On the surface, there is a lot of consistency in how widely the top companies spread their work. Consider that the 36 firms included in the overall ranking that, along with our patent prosecution and IP litigation rankings, appears here were mentioned by companies at least five times for doing either prosecution or litigation work. But only five firms—Baker Botts, Fish & Richardson, Foley & Lardner, K&L Gates, and Greenberg Traurig—got enough mentions to also qualify for spots on our prosecution and litigation lists.

    With two exceptions, no firm got more than three mentions from companies in a single industry. The exceptions: Baker Botts and Fish & Richardson (a finalist in this year’s IP Litigation Department of the Year contest; see “Perfecting the Art of War.” ). Both were named by multiple high-tech and/or telecommunications companies.

    Fish did litigation and prosecution work for Apple Inc., H-P, and Intel Corporation, and litigation for Microsoft Corporation.

    By contrast, the top firm with the most diverse docket was King & Spalding, whose seven mentions came from seven different clients, each of them in a different industry. For example, the firm did litigation work for The Coca-Cola Company (beverage), Chevron Corporation (energy), and International Business Machines Corporation (technology), and prosecution for General Electric Company (diversified financials), The Procter & Gamble Company (household and personal products), Citigroup Inc. (financial services), and Costco Wholesale Corporation (retail).

    With eight and six mentions, respectively, two of the top litigation firms—Fish & Richardson and IP Litigation Department of the Year winner Quinn Emanuel Urquhart Oliver & Hedges (see “What Rhymes with Win?” ) had four clients between the tech and telecom sectors. Compare that to Wilmer and King & Spalding, with four mentions spread across four different industries. When it comes to litigation, high-tech companies and telecoms stand out, with top industry players using 15 out of the 18 firms to rack up at least four mentions. Given that, between them, these companies account for only 12 percent of the 100 biggest companies, the fact that they hired so many top litigation firms is certainly noteworthy. Is it any wonder that technology companies—frequent targets of so-called patent troll infringement claims—have been a driving force in the push to reform the nation’s patent system?

    The Prosecution List

    While it may not be as lucrative as litigation, patent prosecution work can be plentiful. Consider that in 2008, Fortune 100 corporations collectively received well over 21,000 patents, according to figures from the Intellectual Property Owners Association and the Patent and Trademark Office.

    So who’s doing the bulk of that work? Thirty firms earned at least four mentions. At the top of the list, there is little overlap with the top litigation shops. Only three firms—Baker Botts, Fish & Richardson, and K&L Gates—climbed into the top four on both lists.

    http://www.law.com/jsp/iplawandbusiness/PubArticleIPLB.jsp?id=1202437199242
    The IP Litigation Department of the Year
    IP Law & Business
    December 01, 2009

    Winner: Quinn Emanuel What Rhymes with Win?
    Finalist: Fish & Richardson Perfecting the Art of War
    Finalist: Weil, Gotshal & Manges Tried and True
    Finalist: Winston & Strawn The Net Effect

    http://www.fr.com/news/2010/january/americanlawyer.pdf
    The Fish docket is mostly defense cases, but
    the firm can flex its enforcement muscles. Case
    in point: Fish helped Callaway Golf Company
    win an injunction blocking the sale of Acushnet
    Company’s Titleist Pro V1, which generated
    $1.9 billion in sales in 2008. While that win
    was sent back for a retrial due to a technical is-
    sue, Callaway GC Michael rider says he has no
    qualms about hiring Fish to handle all his pat-
    ent litigation: “They know the patent law abso-
    lutely cold, and know how to try patent cases.”

    http://www.fr.com/news/2010/january/FishIPLaw360.pdf
    Law360, New York (January 01, 2010)
    Fish & Richardson PC
    Fish & Richardson earned top spot in Law360’s IP firm rankings for its success in
    reversing over $700 million in damages awards against Microsoft Corp. and in forging
    new law concerning the fraud standard in trademark disputes.

    http://www.fr.com/news/articles.cfm?topicid=13
    Recent Wins

  126. Hi Netlist,

    The search at that address was retired a few months ago, and Matt Cutts announced on his blog in early November to Expect Caffeine after the holidays. In that post, Matt mentioned that they would be showing Caffeine results at one data center so that they could continue to test it.

    From what I have heard, Caffeine results were being shown for roughly half the visitors to the data center at IP address 209.85.225.103. It’s quite possible that Google has rolled out Caffeine results to more data centers at this point, but we can’t be certain.

  127. In re: the above article that this thread is under – no one thinks it’s too much of a coincidence that GOOG announces this Caffeine project exactly one week after NLST announces their HyperCloud? NLST comes out with something seemingly revolutionary in memory and cloud computing, and a week later GOOG announces that they’re upgrading their infrastructure code and doing an overhaul to make it faster? Something that would require a memory upgrade?

    And, as of this writing, the NASDAQ is up big, and NLST is down on very low volume. They’re driving it down with 100-share trades, and then buying 10 & 15K share blocks once they get it down.

    NLST is being manipulated like there’s no tomorrow.

  128. quote:
    no one thinks it’s too much of a coincidence that GOOG announces this Caffeine Project exactly one week after NLST announces their HyperCloud? NLST comes out with something seemingly revolutionary in memory and cloud computing, and a week later GOOG announces that they’re upgrading their infrastructure code and doing an overhaul of their browser to make it faster? Something that would require a memory upgrade?

    Although having more memory in servers would allow GOOG to do things on a different scale, the media coverage and GOOG’s own information seem to suggest an improvement in the algorithms and related software. Any improvement in the hardware is not explicitly mentioned, it seems.

    GOOG was using the infringing memory prior to the GOOG announcement. If anything, Caffeine may have been based on that memory. Thus it would have little to do with NLST’s announcement schedule.

  129. Recall that GOOG had gone to court to prevent NLST seeking to shut down GOOG servers – that case is GOOG vs. NLST.

    During discovery for GOOG vs. NLST, GOOG was forced to show a GOOG server to NLST which had “Mode C” (smoking gun for “4-rank”). That led to NLST filing case NLST vs. GOOG.

    As reported above, both GOOG and NLST asked the court to consolidate the two cases because they deal with the same memory.

    Now it seems (Feb 3, 2010) Judge Armstrong has DENIED the consolidation. She is saying the GOOG vs. NLST case is well on its way (with discovery on track), so why delay it.

    And she will let the cases go ahead separately.

    What does this mean?

    It means the GOOG vs. NLST case (which is at an advanced stage) will not be delayed.

    Note that both parties were keen to consolidate the cases.

    Here is what she says:

    quote:
    Based on that
    commonality, the parties request that the Court: (1) consolidate the cases for trial under
    Federal Rule of Civil Procedure 42(a); (2) vacate the pretrial schedule and trial date in the First
    Case in order to coordinate both cases for trial; and (3) schedule a date for a Case Management
    Conference to set a new pretrial schedule applicable to both cases.

    The Court is not convinced that the parties’ requests to consolidate and vacate the
    pretrial schedule and trial date in the First Action are either necessary or appropriate. The First
    Case based on Netlist’s decision to file a
    new action over a year after the First Case was filed, particularly given that the new action
    purportedly involves the same memory modules at issue in the First Case.

    The Court also has serious concerns regarding the potential for the instant litigation to
    expand exponentially, thereby increasing the cost to the parties and consuming an inordinate
    amount of judicial resources. Although a settlement conference has been scheduled in the First
    Case for August 3, 2010, the Court believes that it is in the parties’ mutual interest to engage in
    a settlement conference or mediation, sooner rather than later—before the parties have
    expended what likely will be a considerable amount of time and resources litigating these two
    cases.

  130. I’m pretty confident NLST is going to make another run again soon. I’m not some delusional buy and hold long who tells himself whatever he needs to hear while he keeps losing money.

    I bought the stock @ 2.46 on that 1st Friday during the run-up. I could have bought much cheaper, but it wasn’t until I did the DD that I realized the implications of what they had come up with. I sold the following Monday a little under 6, and then bought back in @ the close on Tuesday @ 4.03. Sold it at the end of the week @ 6.76. Played it a few more times on momentum and spikes here and there. But when they got it down to 3.34 the other day, I had to buy back in. And I bought back in deep. Got me 30K shares @ 3.43/3.42.

    Watching it trade the last few days, it’s really obvious that the crooks are walking it down on super low volume.

    In the meantime, I really wish NLST would offer a little forward guidance.

  131. I love this page – you guys are well informed and very knowledgeable…
    Can you tell me where you review the court docs for these cases?
    Thank you very much.

  132. quote:
    Can you tell me where you review the court docs for these cases?

    For completeness here is the link:

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=11572&mid=13025&tof=1&frt=2#13025
    Re: update on the various court cases


    By the way, anyone wanting to look for court cases can do so at:
    http://pacer.uspci.uscourts.gov/

    You need to register, but you only pay once fees reach a certain amount (you can pay by credit card).

    Click on “Enter U.S. Party/Case Index”.
    Click on “All Court Types”

    search for netlist:
    Party Name: netlist

    The cases will be listed (though with cryptic ids) – here is a guide:

    NLST vs. Inphi:
    4 NETLIST INC. cacdce 2:2009cv06900 09/22/2009 830

    GOOG vs. NLST:
    10 NETLIST, INC. candce 4:2008cv04144 08/29/2008 830

    NLST vs. GOOG:
    13 NETLIST, INC. candce 4:2009cv05718 12/04/2009 830

    Inphi vs. NLST:
    14 NETLIST, INC. cacdce 2:2009cv08749 11/30/2009 830

    Clicking on the ID will show a page – you can view it in HTML (webpage) or as PDF. View in HTML for now.

    Click on “Docket Report”.

    This will show what’s going on – and will have links to the individual docket entries (judge’s rulings, filings by NLST/GOOG, etc.).

  133. Another patent application assigned to Google was published at the end of January:

    Methods and Apparatus of Stacking DRAMS

    From the patent filing:

    CROSS-REFERENCE TO RELATED APPLICATIONS

    [0001]This application is a continuation of U.S. Patent Application entitled “Methods and Apparatus of Stacking DRAMs,” Ser. No. 12/055,107, filed on Mar. 25, 2008, now U.S. Pat. No. 7,599,205 issued on Oct. 6, 2009, which is a continuation of U.S. Patent Application entitled “Methods and Apparatus of Stacking DRAMs,” Ser. No. 11/515,406, filed on Sep. 1, 2006, now U.S. Pat. No. 7,379,316 issued on May 27, 2008, which in turn claims the benefit of U.S. Provisional Patent Application entitled “Methods and Apparatus of Stacking DRAMs,” Ser. No. 60/713,815, filed on Sep. 2, 2005, which are incorporated herein by reference.

    Netlist’s ‘386 patent looks like it was filed on July 1, 2005, which was a couple of months earlier than the provisional patent application.

    Not sure if any of this has any impact or significance for any pending litigation, and there is the possibility that there might be additional unpublished patent filings as well, but thought it was worth mentioning.


  134. Another patent application assigned to Google was published at the end of January:
    Methods and Apparatus of Stacking DRAMS

    Thanks.


    Netlist’s ‘386 patent looks like it was filed on July 1, 2005, which was a couple of months earlier than the provisional patent application.

    NLST claims their IP dates back to March 2004 (from court filings).

    Yes, this seems to be a MetaRAM patent that may have been in process (continuation of earlier patent).

    It says it is a continuation of this patent:
    http://www.freepatentsonline.com/7599205.pdf
    Which itself is a continuation of:
    http://www.freepatentsonline.com/7379316.pdf

    This is a long-standing patent thread at MetaRAM – for “stacking DRAMs”.

    Since GOOG is now the owner of the original thread, and of all derivative patents, we see GOOG as the direct owner. Note that the lawyer is Fish & Richardson (GOOG’s lawyers). I don’t know if Suresh Rajan is now a GOOG employee – but it would make sense if MetaRAM’s main inventors were brought into GOOG’s hardware division.

    This is related to the “stacking DRAMs” work that MetaRAM was doing, which, as I pointed out earlier, NLST was critical of for its asymmetrical lines to memory chips (i.e. asymmetric delays along the lines).

    As posted above:


    Compare NLST to MetaRAM (now bankrupt) design:

    http://www.ansoft.com/ie/Track2/DDR3%20Memory%20Module%20Design.pdf

    It shows MetaRAM was to deliver 16GB 2-rank R-DIMMs in Dec 2008 at a slower 1066 MT/s speed than the 1333 MT/s for the 8GB (and slower than the 1333 MT/s of the NLST 16GB HyperCloud).
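    For a sense of what those transfer rates mean, peak bandwidth is roughly transfers per second times the bus width. A back-of-the-envelope sketch, assuming the standard 64-bit (8-byte) DDR3 data bus:

```python
# Rough peak-bandwidth arithmetic for the MT/s figures above,
# assuming a standard 64-bit (8-byte) DDR3 data bus.
def peak_bandwidth_mb(mt_per_sec: int, bus_bytes: int = 8) -> int:
    """Peak rate in MB/s: mega-transfers/sec x bytes moved per transfer."""
    return mt_per_sec * bus_bytes

for speed in (1066, 1333):
    print("%d MT/s -> %d MB/s" % (speed, peak_bandwidth_mb(speed)))
# 1066 MT/s -> 8528 MB/s (~8.5 GB/s)
# 1333 MT/s -> 10664 MB/s (~10.7 GB/s)
```

    So the step down from 1333 to 1066 MT/s costs roughly 2 GB/s of peak bandwidth per channel.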

    You can also see the problems with the MetaRAM design – the layout of the chips is asymmetrical, and the height increases considerably for the 16GB. It has the Hynix label on it.

    You can see the “discrete decoupling capacitor” (compare to the “embedded passives” of NLST’s IP).

    And compare with NLST comments (also from earlier post above):


    http://www.netlist.com/technology/technology.html
    While some packaging companies stack devices to double capacity, Netlist achieves the same result without stacking, resulting in superior signal integrity and thermal efficiency. Stacking components results in unequal cooling of devices, causing one device to run slower than the other in the stack. This often results in module failures in high-density applications.

    The density limitation is solved by proprietary board designs that use embedded passives to free up board real estate, permitting the assembly of more memory components on the substrate. The performance of the memory module is enhanced by fine-tuning the board design to minimize signal reflections, noise, and clock skews.

  135. Hi netlist,

    No apologies necessary. All your efforts towards making this thread become as informative as it is are truly appreciated.

    The html element “blockquote” can be used to indent, like in my comment above.

  136. Netlist,
    I am confused. I see the following part on the Netlist website:
    NMD2G7G3510BH-D85 16GB 1066MHz 2Rx4 16GB x4 4Gb DDP Planar LP

    Does DDP mean stacked(?) devices? Why is Netlist selling stacked devices and not using
    their own proprietary technology?

  137. Netlist,
    BTW, if stacked technology is being used by Netlist, then will the MetaRAM
    patent apply? Any ideas on how the MetaRAM patent might limit Netlist?

  138. quote:
    I am confused. I see the following part on the Netlist website
    NMD2G7G3510BH-D85 16GB 1066MHz 2Rx4 16GB x4 4Gb DDP Planar LP
    Does DDP mean stacked(?) devices? Why is Netlist selling stacked devices and not using
    their own proprietary technology?

    How do you presume it is “stacked DRAM” ?

    NLST explicitly disparages stacked DRAM use by “other companies”:

    While some packaging companies stack devices to double capacity, Netlist achieves the same result without stacking, resulting in superior signal integrity and thermal efficiency. Stacking components results in unequal cooling of devices, causing one device to run slower than the other in the stack. This often results in module failures in high-density applications.

    The density limitation is solved by proprietary board designs that use embedded passives to free up board real estate, permitting the assembly of more memory components on the substrate. The performance of the memory module is enhanced by fine-tuning the board design to minimize signal reflections, noise, and clock skews.

    MetaRAM had other IP (including “stacked DRAMs”) which it DID NOT use against NLST. What does that suggest ?

    Instead it was one patent 7472220 that was used in
    retaliatory suit against NLST:
    http://www.freepatentsonline.com/7472220.pdf

    As posted above:
    7472220 – MetaRAM license to NLST ..
    http://assignments.uspto.gov/assignments/q?db=pat&qt=&reel=&frame=&pat=7472220&pub=&asnr=&asnri=&asne=&asnei=&asns=

    That patent is now licensed to NLST as part of settlement (and any buyer – GOOG or other – of this IP from MetaRAM will not be able to use it against NLST).

    From the PR at time of NLST/MetaRAM settlement:
    http://finance.yahoo.com/news/Netlist-Announces-Settlement-prnews-1484777084.html?x=0&.v=1
    Netlist Announces Settlement of Patent Infringement Lawsuits With MetaRAM
    Press Release Source: Netlist, Inc. On Thursday January 28, 2010, 1:25 pm EST

    A provision in the settlement protects Netlist if another company purchases MetaRAM’s patent and attempts to seek action against Netlist in the future.

  139. quote:
    I am confused. I see the following part on the Netlist website
    NMD2G7G3510BH-D85 16GB 1066MHz 2Rx4 16GB x4 4Gb DDP Planar LP
    Does DDP mean stacked(?) devices? Why is Netlist selling stacked devices and not using
    their own proprietary technology?

    “4Gb DDP” seems to be some type of memory as can also be seen here:

    http://www.intel.com/technology/memory/ddr/valid/ddr2_800_sodimm_results.htm
    M470T5267AZ3-CE7 Samsung K4T4G274QA-TCE7 4GB 4Gb(DDP) 8 5-5-5 0801 No

    More specifically:
    DDP = Dual Die Packaging

    Where you have two dies in same packaging (as opposed to the traditional one die in one packaging).

    That is, two memory chip wafer pieces inside one packaging.

    This is not the same thing as “stacked DRAM” (MetaRAM) which relates more to how you organize memory chip packages on a memory module.

    http://www.freshpatents.com/Memory-system-dt20080131ptan20080025128.php

    might be the “NetVault” line of products which CEO Hong has mentioned in Needham conference audio (which include onboard flash memory to backup memory module contents in case of power failure).

  140. Sorry cut the last para out about NetVault.

    I was half thinking that until appropriate google searches revealed DDP means something else.

  141. Netlist,
    It is not clear that stacked and DDP are different. From:
    http://en.wikipedia.org/wiki/Dynamic_random_access_memory

    Stacked RAM modules contain two or more RAM chips stacked on top of each other. This allows large modules (like 512mb or 1Gig SO-DIMM) to be manufactured using cheaper low density wafers. Stacked chip modules draw more power.

    Does this not mean DDP = 2 dies in same package = stacked?
    Totally confused now.

  142. Another item from Netlist webpage
    http://www.netlist.com/technology/technology.html

    >>While some packaging companies stack devices to double capacity, Netlist achieves the same result without stacking, resulting in superior signal integrity and thermal efficiency.

    It appears that DDP is the same as stacked? No?
    Would it be dangerous if HyperCloud used stacked technology?

  143. quote:
    Stacked RAM modules contain two or more RAM chips stacked on top of each other. This allows large modules (like 512mb or 1Gig SO-DIMM) to be manufactured using cheaper low density wafers. Stacked chip modules draw more power.
    Does this not mean DDP = 2 dies in same package = stacked?

    An attempt at explanation of the terminology:

    “die” – small stamp-sized piece of the shiny silicon wafer
    http://en.wikipedia.org/wiki/Die_preparation

    “memory chip” – die embedded within that black-plastic type stuff that people usually call a “chip” – has metal conductive pins coming out of it (shorter in the case of surface-mount chips).

    “memory module” – that stuff you put in the memory slot of your computer – comprising a circuit board (maybe a sophisticated many-layered one, or one including resistors/capacitors within it – as with NLST’s “embedded passives”). The circuit board has many “memory chips” on it (see above).

    NLST’s technology lies not in the “die”, nor in the “memory chip”. They buy the memory chips from Hynix and others (the first NLST HyperCloud is slated to use Hynix “memory chips”).

    So Hynix is a “memory chip” manufacturer.

    NLST is a consumer of those “memory chips” and a manufacturer of “memory modules”.

    NLST combines “memory chips” so they fit on a “memory module”. This they do by IP (intellectual property/patents) that includes “embedded passives”, plus IP on how to place “memory chips” for even heat dissipation. That is, there is IP related to how you structure a “memory module” i.e. how you use those “memory chips” to construct a “memory module”.

    In addition, NLST has IP in the extra circuitry that goes on the “memory module”. These are chips that NLST makes on its own – the buffer chip is a specialized ASIC for handling the control signals, address lines and data lines that go to the “memory chips” on the “memory module”.

    In addition, NLST has some circuitry for “load isolation”, so that only some set of “memory chips” is visible at a time (perhaps imprecise here), etc.

    This is NLST’s purpose. They do not indulge in “memory chip” design, nor in “die” or wafer work. They basically make complete “memory modules” that people can buy and put directly into their computer motherboard.

    So NLST is a “memory module” maker, and it has IP to back that up. That IP relates to how the “memory module” is made/structured as well as all the EXTRA circuitry that they have put on that memory module.

    MetaRAM was similar – they ALSO did this (before the settlement prevented them from doing so – and, well, the bankruptcy).

    MetaRAM ALSO had IP in load isolation, and in “memory chip” placement on the memory module. Since a “memory module” is usually a standard-sized piece of circuit board (albeit an advanced circuit board), they had to come up with ways to fit more memory chips on there. Their “stacked DRAM” IP relates to THAT aspect.

    NLST does not use “stacked DRAMs” because, as pointed out above, they feel it is an inferior way of doing it.

    Coming back to MetaRAM – so MetaRAM ALSO made “memory modules”. In addition, they were willing to work with Hynix and others to either sell them completed memory modules for resale, OR share their IP so OTHER companies could do something similar. This is the “royalty-based” model (i.e. instead of making all the memory modules yourself). This model is exemplified by RMBS. NLST has referred to it in the Needham conference audio.

    quote:
    we have strong IP which create competitive barriers as well as provide future avenues for a royalty based business model

    The problem with MetaRAM was that they didn’t have IP in “embedded passives” etc., which means they were not able to create more space on the same small circuit board the way NLST can.

    They tried to fit more “memory chips” by stacking them – i.e. “stacked DRAMs” – or other techniques to fit more chips on the same “memory module”.

    As noted above, that is not how NLST does it.

    A second problem with MetaRAM was that their IP is from much later – and, it could be said, is “derivative” of or inspired by NLST IP. You have to ask yourself why a high flier like MetaRAM (with support from INTC and others – with Hynix and STEC and others all planning to use their IP/buffer chips) suddenly closed shop. Was it related to the GOOG/NLST lawsuit, and was there some realization within MetaRAM? Why did MetaRAM say they only sold $37,000 worth of product and “destroyed” it (from MetaRAM court filings) – what was the hurry to “destroy” stuff? MetaRAM was trying to minimize the potential infringement penalty.

    So basically NLST makes, and MetaRAM made, whole “memory modules” – they did not make “memory chips” or “dies”.

    Inphi is similar – except they may not even make the “memory module”, just the buffer chips and allied circuitry so others can make it. The difference is they hold even less IP than MetaRAM. Inphi is a component maker – they make lots of different components. They were hoping to step in after MetaRAM dropped out.

    So in summary:

    Stacked DRAM refers to stacking “memory chips” – and is a way of arranging the “memory chips” on the “memory module”.

    DDP – dual die packaging. This is when “memory chip” manufacturers like Hynix make “memory chips” with TWO dies in them.

    So the “memory module” that NLST/MetaRAM make can include a normal “memory module” or a DDP “memory module”. They thus label their memory module specs with “DDP” or no DDP.

    Hope this resolves the confusion between:
    DDP – this is done by “memory chip” manufacturers like Hynix etc.
    stacked DRAMs – this was done by MetaRAM in how it places those “memory chips” on “memory module”

    They are two different things – relating to things that go on at two different scales – one within the “memory chip” black plastic packaging, and one on the “memory module” circuit board.

    Hope this helps.
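    One way to check that the terminology hangs together is the capacity arithmetic for the part number quoted earlier (16GB 2Rx4 with 4Gb DDP packages). A sketch, assuming the standard 64-bit DDR3 data bus:

```python
# Capacity sanity check: a 2Rx4 module has 64/4 = 16 packages per rank;
# with 4Gb (dual-die) packages, 2 ranks x 16 packages x 4 Gbit
# = 128 Gbit = 16 GB -- matching the "16GB 2Rx4 4Gb DDP" part above.
def module_capacity_gb(ranks: int, chip_width_bits: int,
                       chip_density_gbit: int, bus_bits: int = 64) -> int:
    """Total module capacity in GB."""
    chips_per_rank = bus_bits // chip_width_bits
    total_gbit = ranks * chips_per_rank * chip_density_gbit
    return total_gbit // 8  # 8 Gbit per GB

print(module_capacity_gb(ranks=2, chip_width_bits=4, chip_density_gbit=4))  # -> 16
```

    Whether those 4 Gbit come from one die or two dies in a package (DDP), the module-level math is the same – which is why DDP and “stacked DRAM” are separate questions.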

  144. Slight correction to sentence above ..

    quote:
    So the “memory module” that NLST/MetaRAM make can include a normal “memory module” or a DDP “memory module”. They thus label their memory module specs with “DDP” or no DDP.

    Should read:
    So the “memory module” that NLST/MetaRAM make can include a normal “memory chip” or a DDP “memory chip”. They thus label their memory module specs with “DDP” or no DDP.

  145. In a recent interview, the NLST CEO said they strategically dedicated and spent over $10 million on R&D for products such as HyperCloud and NetVault. It is not surprising that they are vigorously protecting their investment in the IP portfolio through negotiation, with litigation as a last resort. They seem to have the facts and the law on their side.

    First, MetaRAM settled ‘386 patent infringement in December 2009, and agreed to cooperate and stop its infringement. Will MetaRAM’s cooperation include full disclosure of relevant customer lists including in Google’s case?

    Next, Inphi must Answer by Feb. 11, 2010 to NETLIST’s Amended Complaint that added ‘912 and ‘274 patents in addition to initial allegations of ‘386 patent infringement. USPTO issued Patent 7,619,912, entitled “Memory Module Decoder” on 11/17/2009, and Patent 7,636,274, entitled “Memory Module with a Circuit Providing Load Isolation and Memory Domain Translation” on 12/22/2009. It appears Inphi hired a former employee of MetaRAM.

    On Google’s Declaratory Relief on Non-Infringement of the ‘386 patent, and the Netlist v. Google case for ‘912 patent infringement, a case management conference is set for 3/4/2010.
    Is Google running out of time? As in the above post, Judge Armstrong denied the request to consolidate and, in essence, said either settle or try the case on its merits, but no delays. It is interesting to note that the Order strongly suggested the parties “engage in a settlement conference or mediation, sooner rather than later”. The judge also mentioned that one of the reasons for denial was that the Court had already held a claims construction hearing and construed the disputed claim terms. It seems that the Court ruled largely in favor of NLST’s claims construction. The same Court granted Netlist’s discovery request to examine a Google server. Subsequently, NLST sued Google alleging ‘912 patent infringement.

  146. Right.

    Also, earlier, Judge Armstrong had denied a GOOG fishing expedition regarding “use of ‘386 patent prosecution history”. Though I can’t find that at the moment – I recall reading it somewhere.

    quote:
    First, MetaRAM settled ‘386 patent infringement in December 2009, and agreed to cooperate and stop its infringement. Will MetaRAM’s cooperation include full disclosure of relevant customer lists including in Google’s case?

    Yes, it would be interesting to note the exact settlement in NLST/MetaRAM.

    Even the possibility exists that IP beyond the one patent MetaRAM used in the retaliatory lawsuit could be compromised. However, my gut feeling is that if the case had gone to a jury trial and MetaRAM had been found liable, THEN that could have rendered suspect much of MetaRAM’s IP (even if sold to GOOG previously).

    However, with a settlement, there is no legal bar on MetaRAM (or those who bought from MetaRAM) – i.e. there is no enforcement – except for what MetaRAM owns now. So the IP most relevant to NLST (which MetaRAM had still held for use in the retaliatory lawsuit) was handed over to NLST (or licensed, with a bar on any future buyer using it against NLST).

  147. On NLST’s investor SEC filings page, a report dated Feb. 11, 2010 shows Renaissance Technologies LLC owning more than 1.4 million shares as of Dec. 15, 2009.

  148. Updates on NLST vs. Inphi.
    Updates on NLST vs. GOOG.

    We have Inphi answer in NLST vs. Inphi. Standard boilerplate answer – we challenge patents etc.

    We have GOOG answer in NLST vs. GOOG. Standard boilerplate answer.

    However, there is some interesting information in the GOOG answer about the goings-on at the JEDEC meetings (specifically the “JEDEC JC-45” committee meetings).

    From GOOG answer we find that:

    INTC presented their FB-DIMM quad-rank (4-rank) proposal in May, June, August and December 2007.

    GOOG says that at the June 2007 meeting, NLST representatives “withheld” the information that they held patents in this area and/or had new patents in process.

    At the August 2007 meeting, the same.

    At the December 2007 meeting, GOOG says that NLST revealed that it held IP which may apply to the FB-DIMM and 4-rank/quad-rank designs.

    However, NLST was willing to provide access to that IP on RAND terms (as JEDEC members do as part of JEDEC).

    http://en.wikipedia.org/wiki/Reasonable_and_Non_Discriminatory_Licensing
    Reasonable and Non Discriminatory Licensing

    On Jan 8, 2008, NLST inventor Bhakta sent a letter to JEDEC offering RAND terms “but only identified the ‘386 patent” (which is normal).

    This makes sense, as the ‘912 patent is just a continuation of the ‘386. On a superficial reading you cannot see any major difference between the two:

    http://www.freepatentsonline.com/7289386.pdf
    http://www.freepatentsonline.com/7619912.pdf

    However, since NLST’s complaint has referred to the ‘912 patent (representing the ‘386 patent thread), GOOG has chosen to focus on the ‘912 while not addressing the ‘386 patent, which NLST could just as easily add to the complaint (or which is perhaps implicitly included, since ‘912 is a superset of ‘386).

    The answer by GOOG is reminiscent of some of the controversy in the RMBS/JEDEC tussle. There it was alleged that RMBS knew their designs were being standardized, or in some cases patented IP AHEAD of decisions by JEDEC (knowing that those areas would become valuable to JEDEC’s future direction).

    The case of NLST/JEDEC is simpler – here NLST’s IP predates (March 2004) the JEDEC standardization. The ‘386 patent had been issued (and NLST had announced it to JEDEC) before the JEDEC members voted for the standard.

    Also, Bill Gervasi, one of the inventors of 4-rank (while at NLST), later became a JEDEC committee chair, as well as an employee at SimpleTech.

    http://www.discobolusdesigns.com/personal/gervasi_modules_overview.pps
    Memory Modules Overview
    Spring, 2004
    Bill Gervasi
    Senior Technologist, Netlist
    Chairman, JEDEC Small Modules
    & DRAM Packaging Committees

    http://www.stec-inc.com/products/DRAM/4rank_DRAM.pdf
    4 Rank DRAM Modules
    Addressing Increased Capacity Demand Using Commodity Memories
    Bill Gervasi, VP DRAM Technology, SimpleTech
    Chairman, JEDEC JC-45.3
    January 19, 2006

    http://www.discobolusdesigns.com/personal/stec_atca_memory_20061017.pdf
    Memory Modules for ATCA and AMC
    Bill Gervasi
    Vice President, DRAM Technology
    Chairman, JEDEC JC-45.3

    Note that NLST in its complaint also alleges leakage of its IP to JEDEC. Or by Texas Instruments?

    I don’t know if Bill Gervasi is considered part of that leakage (that JEDEC benefitted from).

    Some info on JEDEC’s JESD82-20A – FBDIMM Mode C proposed standard etc.:
    http://www.jedec.org/download/search/JESD82-20A.pdf
    http://www.jedec.org/download/search/JESD82-28A.pdf

    JESD82-20A.pdf has the following disclaimer:
    Special Disclaimer
    JEDEC has received information that certain patents or patent applications
    may be relevant to this standard, and, as of the publication date of this
    standard, no statements regarding an assurance or refusal to license such
    patents or patent applications have been provided.
    http://www.jedec.org/download/search/FBDIMM/Patents.xls
    JEDEC does not make any determination as to the validity or relevancy of
    such patents or patent applications. Prospective users of the standard
    should act accordingly.

    The Patents.xls file is no longer available at that address. However, this demonstrates that there were IP-infringement shadows cast on the JESD82-20A standard.

    So why did GOOG infringe, knowing those caveats existed?

    But for reference, here is the RMBS story:
    http://en.wikipedia.org/wiki/Rambus

    As can be seen, their behavior was suspect in some cases – i.e. securing IP ahead of JEDEC decisions.

    However they have prevailed in court despite those negatives:

    http://www.mercurynews.com/business-headlines/ci_14224770
    Rambus wins $900 million from Samsung
    By Steve Johnson
    sjohnson@mercurynews.com
    Posted: 01/19/2010 05:22:07 PM PST
    Updated: 01/20/2010 03:04:17 AM PST

    It is now clear that NLST is claiming that the FB-DIMM and 4-rank/quad-rank designs infringe NLST IP.

    FB-DIMM is a major part of the memory design roadmap – it introduced serial signalling and the use of the AMB (buffer) on the memory module. The AMB buffer part is probably the infringing part of the JESD82-20A standard.

    http://en.wikipedia.org/wiki/Fully_Buffered_DIMM
    The JEDEC standard JESD206 defines the protocol, and JESD82-20 defines the AMB interface to DDR2 memory.

    GOOG says there was no public info that the ‘912 had also been filed (is this true – can one not search for in-process patent applications? In general, US applications are published 18 months after their earliest filing date, unless a non-publication request was filed).

    GOOG’s answer suggests that JEDEC finally voted for the standard despite knowing of NLST IP issues (with the ‘386 patent if not the ‘912 patent) – quote:

    The Intel proposed changes to JESD82-20 were incorporated in JESD82-20A. The JEDEC members voted to issue the JESD82-20A standard, having all such JEDEC members, except for Netlist representatives, vote unaware of the patent application that led to the ‘912 patent.

    GOOG claims that that approval (without NLST participation) went on while the members were “unaware of” the ‘912 patent in process.

    quote:
    Netlist has affirmatively attempted to disclose the ‘386 patent as relevant to certain JEDEC standards, with knowledge that they had filed a continuation of that patent, which was to issue as the ‘912 patent.

    GOOG says – quote:
    Netlist’s silence as to the patent application that led to the ‘912 patent induced the other JEDEC members to rely upon that standard being free of intellectual property encumbrances. The JEDEC members, including Google, were without information regarding the pending patent application that was to issue as the ‘912 patent when JEDEC issued the JESD82- 20A standard.

    What about the progenitor ‘386 patent? That was already public information by that time. What “induced” the JEDEC members (GOOG among them) to vote again for the standard – knowing by then that it conflicted with the ‘386 patent (if not the ‘912 patent)?

    Focusing on the ‘912 patent is a bit of a red herring – because NLST has used the ‘912 in the complaint (as the latest of the ‘386 patent thread).

    What prevented the JEDEC members from abandoning this standard (if they knew it was conflicted by then)?

    JEDEC approval of a standard does not reduce the burden of securing IP that supports that standard.

    Why did GOOG go ahead with rampant use of NLST IP after that? Ratification of a standard at JEDEC does not automatically give people the right to use NLST IP – to do that you STILL have to negotiate licensing terms (even if it is a JEDEC “standard” and even if the terms are expected to be RAND).

  149. Netlist,
    Thank you for going through the GOOG response. However, it does appear that
    Joe Soleiman (sp?) of Netlist, who is an inventor of the ‘912, attended the JEDEC meeting
    and appeared to keep quiet about the ‘912 patent application. Hard for him to
    claim that he did not know? It would be useful for GOOG to get the ‘912 thrown out
    based on Netlist cheating and focus only on the ‘386. Don’t you agree with that strategy?
    Also, any chance that Netlist gets thrown out of JEDEC (like Rambus) and loses the
    opportunity to claim that their HyperCloud modules are “JEDEC compatible”?

  150. quote:
    However it does appear that
    Joe Soleiman (sp ?) of netlist who is an inventor of 912 attended JEDEC meeting
    and appeared to keep quiet about the 912 patent application. Hard for him to
    claim that he did not know ? It would be useful for goog to get 912 thrown out
    based on netlist cheating and focus only on 386. Don’t you agree with that strategy ?

    In my cursory reading of the two patents, i.e. ‘386 and ‘912, I couldn’t find any serious difference between them. I suspect the only reason NLST refers to the ‘912 patent is that it is the newest incarnation of the ‘386 patent thread.

    Read through the two patents (links above) and see if there is any difference there.

    I suspect GOOG was in a hurry to file an answer before the deadline. The lawyers are also new (after a change of law firms). It is possible they just put in some boilerplate and threw in the ‘912-specific comments, fully knowing this doesn’t really save GOOG since they have not addressed the ‘386 patent (‘912 being a continuation of ‘386, part of the same patent thread).

    The ‘386 patent was known to JEDEC, and was disclosed by NLST in December 2007 (from GOOG’s timeline, it seems that even after NLST did this, the JEDEC members STILL went ahead and finalized the standard).

    So why agree on the standard despite finding out it infringed?

    And why did GOOG initiate use of the infringing IP without first securing licensing rights from NLST?

    There is something deliberate about this.

    Plus we don’t have full information about the leakage at Texas Instruments (and possibly Bill Gervasi and others who had worked at NLST at time of invention and later worked at JEDEC or STEC).

    In addition, MetaRAM was patenting things throughout this period – they would have known about competitors’ IP.

    INTC was doing FB-DIMM with 4-rank/quad-rank presentations at JEDEC – it is naive to think they didn’t do patent searches in this area ahead of time.

    It is naive to think that just because NLST employees delayed “mentioning” patent continuation applications (‘912 was a continuation of the ‘386 patent) – which may be routine in the industry – this somehow affected anyone’s ability to know that the ‘386 patent already existed.

    quote:
    Also any chance that Netlist gets thrown out of JEDEC (like Rambus) and loses the
    opportunity to claim that their hypercloud modules are “JEDEC compatible” ?

    I am not sure if NLST is even in JEDEC anymore. It wouldn’t surprise me to learn they are no longer part of the memory module committee.

    There is a difference between RMBS and NLST – RMBS pushed radical designs that required major changes (cooperation from motherboard makers, slot architecture, and the like). NLST’s is a plug-and-play solution (requiring no BIOS update) that works with existing systems. So NLST has considerably fewer hurdles than RMBS – and RMBS has done well both in court and in licensing.

  151. Netlist,
    I think Netlist offered 386 patent to JEDEC (court filing says there was a letter from Jack Bhakka ?) so
    maybe 386 was not issue for JEDEC approval ? Only licensing had to be negotiated with Netlist. However
    looks like 912 does not have such letter so there is a difference in disclosures ?
    Texas Instruments and Gervase is interesting. Do you have any thoughts on how netlist can use
    that for more $$ ?

  152. quote:
    I think Netlist offered 386 patent to JEDEC (court filing says there was a letter from Jack Bhakka ?) so
    maybe 386 was not issue for JEDEC approval ? Only licensing had to be negotiated with Netlist. However
    looks like 912 does not have such letter so there is a difference in disclosures ?
    Texas Instruments and Gervase is interesting. Do you have any thoughts on how netlist can use
    that for more $$ ?

    Basically, JEDEC knows the standard runs afoul of the NLST ‘386 patent.

    Same for GOOG – which actually went ahead and used it, without paying a dime to NLST or bothering to discuss it with NLST. GOOG may even have been the mover behind the other players bypassing NLST (if NLST’s claims are right – i.e. encouraging the companies it dealt with, and maybe even MetaRAM, to manufacture the infringing memory).

    You realize that GOOG complaining “we didn’t know of the ‘912 patent application” has the obvious response that NLST just reverts to the ‘386 patent in its arguments.

    This is what I was saying – GOOG nitpicking on “we didn’t know about ‘912” suggests they have no substantive argument against the ‘386 patent and are making that case as a short-term strategy. That is, biding time until settlement, or what?

  153. quote:
    Texas Instruments and Gervase is interesting. Do you have any thoughts on how netlist can use
    that for more $$ ?

    I am not sure about Bill Gervasi – I dropped that name just to highlight that there is considerable promiscuity in this niche area, with people moving from one company to another, and to JEDEC, etc.

    So the argument that people are not aware of patents is misleading (especially after what happened with JEDEC/RMBS) – although initial patent applications may not be visible to others.

    Gervasi, the “inventor” of “4-rank” (he is one of three named inventors on the patent), later went to work for STEC and also chaired the JEDEC committee.

    NLST vs. Texas Instruments is under the radar – perhaps because one needs to go to the court to get the documents (or have them mailed). I couldn’t find it on PACER.

    http://www.faqs.org/sec-filings/091103/NETLIST-INC_10-Q/
    NETLIST INC – FORM 10-Q – November 3, 2009

    Trade Secret Claim
    On November 18, 2008, the Company filed a claim for trade secret misappropriation against Texas Instruments (TI) in Santa Clara County Superior Court, based on TI’s disclosure of confidential Company materials to the JEDEC standard-setting body. On February 20, 2009, TI filed its answer. The parties are currently engaged in settlement discussions. If those discussions are unsuccessful, the Company expects to vigorously pursue its claims against TI.

    Court website:

    http://www.sccaseinfo.org/civil.htm
    Search – enter “netlist” for business name.
    Gives the result:
    Netlist, Inc. 1-08-CV-127991 Netlist, Inc. Vs Texas Instruments, Incorporated Intellectual Property – Unlimited

    Which leads to:
    http://www.sccaseinfo.org/pa6.asp?full_case_number=1-08-CV-127991

    However it doesn’t seem to be on PACER, and seems to require going to court to get documents copied (or via mail).

    Maybe someone will be interested enough to get a copy of the documents (will shed light on the goings on at JEDEC/Texas Instruments).

  154. This seems to be an example of TXN (Texas Instruments) doing something similar to what Inphi is doing – that is, a buffer chip for “quad-rank/4-rank”.

    So does this make TXN similar to Inphi and others – i.e. if “4-rank/quad-rank” is what makes it infringing?

    http://news.thomasnet.com/fullstory/818452
    DDR3 Register is designed for memory modules.
    DALLAS (April 15, 2008) – Texas Instruments (TI) (NYSE: TXN) today announced the industry’s first full production release of a phase locked loop (PLL) integrated DDR3 register for registered dual in-line memory modules (RDIMMs). This device enables system stability through constant clock and output delay over voltage and temperature variation. The single-chip quad rank support saves overall board space and reduces power consumption in servers, work stations and storage equipment. (See http://www.ti.com/sn74ssqe32882-pr.)

    TXN’s SN74SSQE32882 datasheet:
    http://pdf1.alldatasheet.com/datasheet-pdf/view/250017/TI/SN74SSQE32882.html

    http://focus.ti.com/docs/prod/folders/print/sn74ssqe32882.html
    SN74SSQE32882 Status: ACTIVE
    JEDEC SSTE32882 Compliant 28-Bit to 56-Bit Registered Buffer with Address-Parity Test

  155. With Netlist announcing volume production and shipment of its leading products and generating cash flow, Netlist should continue extensive and vigorous discovery against Google to prosecute what NLST claims is willful patent infringement. If Netlist wins, this would be a landmark case, and Google’s stellar image and pocketbook may be damaged. Such a win would encourage smaller companies to invest in R&D and innovation as proprietary technology for volume production and to generate solid ROI, as Netlist is attempting to do. Apple, HP and Dell appear to have recognized the value of Netlist as an innovative company, in contrast to Google’s apparent assessment.

    We should expect discovery dispute motions to reveal the extent of Google’s conduct. Google would undoubtedly seek protective orders to avoid having to turn over such documents. Since MetaRAM settled with Netlist, Google may be unable to deny meetings or collaborations, or to claim that certain documents do not exist.

    If there are settlement discussions, Netlist should consider that after a full disclosure from Google. It would be particularly interesting to know the extent of collaboration with MetaRAM, Intel, Inphi and others, if any.

    If Netlist prevails after a jury trial, how much would the valuation expert articulate to the jury? If punitive or exemplary damages are awarded, will that have relevance to Apple’s litigation with HTC, and implications for Google?

    Both the HyperCloud and NetVault product lines should generate substantial cash flow for several years to come, as described at the link below. http://www.netlist.com/investors/investors.html

    IRVINE, Calif., Feb. 17 /PRNewswire-FirstCall/ — Netlist, Inc. (Nasdaq: NLST), a designer and manufacturer of high-performance memory subsystems, today announces that a major OEM has commenced volume consumption of NetVault-NV™, a flash memory based non-volatile cache memory subsystem targeting RAID (redundant array of inexpensive disks) storage applications. NetVault-NV offers disaster recovery backup from system power failures and is optimized to ensure high system availability in RAID systems without using battery power for backup.

    Simultaneously, within the same product family, Netlist also announces that it is also at mass production status with a major OEM with its NetVault-BB (DRAM memory) based product, Netlist’s third-generation RAID cache solution. NetVault-BB offers disaster recovery backup from system power failures and ensures high system availability.

    NetVault-NV provides server and storage OEMs a solution for enhanced datacenter fault recovery, reduces system downtime and total cost of ownership. Unlike traditional fault tolerant cache subsystems, which rely solely on batteries to power the cache until the IT manager can recover business critical data and restore the system, NetVault-NV utilizes a combination of DRAM for high throughput performance and Flash for extended data retention. With NetVault-NV, data is retained for years following a disaster versus traditional subsystems, which often cannot preserve cache data for more than 24 to 72 hours.

    “NetVault-NV delivers the reliability and performance demanded by end users while reducing the total cost of ownership for this high reliability disaster recovery solution by eliminating the need for battery backup power,” said C.K. Hong, President, and CEO of Netlist. “As the only available flash based merchant market solution, NetVault-NV is also now entering volume production with a tier one OEM. Joining our recently announced HyperCloud product line, the NetVault family further demonstrates Netlist’s leadership in solving critical problems for the datacenter through innovative value-added memory subsystems.”

  156. Will Google capitalize competitively by being the plaintiff and dragging this court case out as far as possible in order to freeze industry acceptance of HyperCloud technology, or will it admit an error in its assessment of Netlist’s intentions and settle so that Netlist can go about its business of innovation? The latter would surely help other Google competitors become more efficient, cut costs and become more green. Maybe that’s Google’s strategy to stay ahead. Dragging this case out would be a cheap way to stay ahead of the competition. It could be a sign that Google is running out of original ideas and has no better business morals than any other big business. Whatever it takes to stay ahead of the game.

  157. Hi Auditor,

    It would be interesting if this case went to a jury, but the vast majority of civil cases end with settlements.

    At the heart of this dispute isn’t the quality or value of the products that Netlist has to offer, but rather whether Google has infringed upon their patented technology. I don’t know the answer to that, but then again, I’ve never claimed to have any particular expertise in memory modules. I also don’t know what impact a legal victory over Google would have to how potential customers might treat Netlist in the market, or the impact on Apple’s litigation with HTC.

  158. Hi Billy,

    I wrote this post about Google’s unusual acquisition of a large amount of memory module technology from Metaram. The purchase is still somewhat of a mystery.

    I did take a look, on the 25th of last month, at a Second Amended Joint Case Management Conference Statement (pdf), and it referred to “four-rank FBDIMMs” as being at the heart of the dispute. The conference was scheduled for March 4th, and I haven’t gone back yet to see what might have happened during that conference.

    I am interested in why you state that this dispute involves LR-DIMM, and what the implications would be if it did. I’m also not sure what interest you have in telling visitors to this blog “Nothing to see here. Move along.” If you’d like to share, I’d be happy to listen. Thanks.

  159. Hi spencity.

    Possibly, but if I were in Google’s shoes, I would hate having to rely upon attempting to delay pending litigation as a tactic for hindering competitors.

    I do see a lot of the patents and whitepapers that Google publishes, and try to keep up with the technology that they hint at and release in other areas. It does seem like they still have more than a couple original ideas coming out of their Mountain View headquarters.

  160. Minor update on court cases.

    NLST vs. GOOG:

    – NLST filed an answer to GOOG’s counterclaims in NLST vs. GOOG – nothing special there, just NLST saying GOOG can’t claim NLST did this or that, when GOOG CONTINUED to violate NLST IP even after knowing about it, i.e. after JEDEC was told the new JEDEC standard violated NLST IP.

    NLST vs. TXN:

    – NLST filed some papers in NLST vs. TXN (Texas Instruments) – can’t get to them via PACER. It would be interesting to see the contextual information in that case.

    http://www.sccaseinfo.org/pa6.asp?full_case_number=1-08-CV-127991
    NLST vs. TXN

  161. quote:
    Will Google capitalize competitively by being the plaintiff and dragging this court case out as far as possible in order to freeze industry acceptance of HyperCloud technology, or will it admit an error in its assessment of Netlist’s intentions and settle so that Netlist can go about its business of innovation?

    quote:
    It would be interesting if this case went to a jury, but the vast majority of civil cases end with settlements.

    The case is unlikely to reach a jury trial – and may end much earlier, because discovery will start to tackle issues with GOOG e-mail and what employees knew and when they knew it. If GOOG is going to settle, there is no point waiting.

    Meanwhile, “dragging it out” does no harm to NLST OR to the competition using NLST memory (I also doubt that is the way GOOG will go about it – it is a negative way of doing things which does not fit GOOG culture, and it is a defensive measure rather than a proactive one).

    The reason is that NLST is not waiting for JEDEC approval, or some “industry approval for a certain format”, to “gain acceptance” – NLST HyperCloud memory modules are “plug and play”, require no BIOS update, and require no changes to the motherboard (unlike CSCO’s UCS strategy, which uses motherboard changes to accomplish something similar).

    So GOOG delaying its approval just deprives NLST of business at GOOG. But it also delays GOOG’s ability to participate in NLST HyperCloud memory qualification. With MetaRAM (the biggest player, with industry support from Intel and others) gone, there are few options available in the market (apart from building their own memory, as GOOG seems to have been doing – or were they using MetaRAM?). GOOG’s use of NLST HyperCloud is a natural fit, and GOOG would deprive ITSELF of the opportunity to use it (while prolonging its infringement of NLST IP), while all the competition could start using it right now.

    In addition, in the event of a loss, GOOG loses face, loses the case, faces treble damages, humiliation among the “do no evil” crowd, removal of the old infringing memory from GOOG servers, and perhaps even a court audit of GOOG servers to see which are infringing and which are not (to calculate damages).

    In fact GOOG gains FAR more by doing a nice deal with NLST. It would probably be excused for having used the infringing memory and could continue to use it – so no disruption to GOOG servers either.

    Also noteworthy is GOOG’s under-the-radar behavior in this matter – the question of who made the memory for GOOG. NLST accuses GOOG of “rallying the troops” to infringe NLST IP – and GOOG MAY have played a part at JEDEC in rallying memory module makers to ignore NLST IP violations.

    It is intriguing how MetaRAM claimed they had “destroyed” the infringing memory products and that they were worth only a small amount. This was very odd, given they were the darlings of the industry – with INTC and others supporting them, they were to supply major memory module makers with buffer chips.

    So the story is not crystal clear regarding GOOG’s behavior – possibly due to a conflict created by a hardware division that thought it could do things in-house without caring about NLST IP.

    So in summary – in stark contrast to what USUALLY happens in such cases, i.e. the big company stringing the small company along until it fails – THAT type of leverage is NOT available to GOOG here.

    Time is not on GOOG’s side, both in terms of escalation of severity and in terms of loss of access to NLST HyperCloud (which is a match for heavily memory-loaded systems). HyperCloud allows this niche area (servers running virtualization that need lots of memory, search workloads as at GOOG, and similar applications) to reduce the hardware needed. These are cases where the CPU is powerful enough but there is not enough memory – new server farms cannot capitalize on new multi-core CPUs if they are limited by the memory-loading capability of current systems (you can’t load up memory without reducing the achievable speed – HyperCloud bypasses that problem).

    HyperCloud helps in:

    – reduced power requirements (the module turns off memory banks that are not in use)

    – greater memory loading at the same bandwidth (as you load up memory, electrical issues limit the maximum achievable speed)

    – lower cost – NLST uses “lower dollar per bit” memory chips to emulate “higher dollar per bit” memory chips. That is, it can make a 16 GByte module using 2 Gbit memory chips; without the NLST IP, you have to use 4 Gbit memory chips. Since memory pricing is not linear, this is a major cost advantage for NLST – a 4 Gbit memory chip costs MORE than 2x what a 2 Gbit memory chip costs.
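    The cost point in the last bullet can be illustrated with some simple arithmetic. This is a minimal sketch, assuming hypothetical chip prices (real prices are not given in this thread; only the “costs MORE than 2x” relationship is claimed):

```python
# Sketch of the non-linear memory pricing argument above.
# Chip prices are HYPOTHETICAL placeholders; only the claimed relationship
# (a 4 Gbit chip costing more than 2x a 2 Gbit chip) matters.

def module_chip_cost(target_gbytes, chip_gbits, price_per_chip):
    """Chips needed for a module of target_gbytes capacity, and their
    total cost (ignoring ranks/ECC for simplicity)."""
    chips = (target_gbytes * 8) // chip_gbits   # GByte -> Gbit, then divide
    return chips, chips * price_per_chip

# A 16 GByte module built two ways:
chips_2g, cost_2g = module_chip_cost(16, 2, 10.0)  # 2 Gbit chips at an assumed $10
chips_4g, cost_4g = module_chip_cost(16, 4, 25.0)  # 4 Gbit chips at an assumed $25 (> 2x)

print(chips_2g, cost_2g)  # 64 chips, $640.0
print(chips_4g, cost_4g)  # 32 chips, $800.0
```

    So even though the 2 Gbit route uses twice as many chips, the module comes out cheaper whenever the higher-density chip carries more than a 2x price premium.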

  162. In the TXN case, I noticed that the ex parte motion from Netlist was granted. An ex parte motion requires special, urgent circumstances. It would be interesting to know what TXN is trying to compel NLST to produce.

    Today’s volume peaked at over 8 million shares, which is curious. There have been no substantiated rumors. It would help to know which major OEM is doing volume consumption or has signed up for volume production. Cisco or Google seems far-fetched at this point. Any recent transactions with Intel, AMD, Apple, Hynix, Samsung?

  163. quote:
    Today’s volume peaked at over 8 million shares, which is curious. There have been no substantiated rumors. It would help to know which major OEM is doing volume consumption or has signed up for volume production. Cisco or Google seems far-fetched at this point. Any recent transactions with Intel, AMD, Apple, Hynix, Samsung?

    There was a rumor today about a CSCO/NLST linkup. The relationship is more combative, though, since NLST’s solution-on-memory-module trumps CSCO’s UCS strategy (where they have to modify the motherboard to do the same thing – thus also rendering it non-standard).

    Here’s a much-repeated article which compares the two:

    http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
    Netlist goes virtual and dense with server memory
    So much for that Cisco UCS memory advantage
    By Timothy Prickett Morgan
    Posted in Servers, 11th November 2009 18:01 GMT

    It also doesn’t help that NLST has an extremely small float – by a calculation on the Yahoo board, something like 4.79M to 7.13M shares are unaccounted for if you exclude management and fund holdings. Total outstanding shares are nearly 20M.

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=13357&mid=13357&tof=1&frt=2
    analysis of NLST float (from recent filings)

    Of those 4.79-7.13M shares, subtract the smaller shareholders and the ones who post on message boards and you have substantially fewer shares available.

  164. Just more reiteration of background info.

    http://www.wsw.com/webcast/needham35/nlst/
    Needham Growth Stock Conference
    January 14 at 2:30 pm ET (Jan 14, 2010)

    Corporate Presentation
    Needham Conference January 2010
    http://www.b2i.cc/Document/1941/Netlist_Needham_Conf_Jan_2010.pdf

    In this Needham presentation, CEO Hong outlines the factors at play:

    Servers traditionally have up to 24 memory sockets, with most having 18 sockets.

    If you populate half of those sockets, i.e. 9 sockets, your achievable memory speed goes down from 1333 MHz to 1067 MHz. The more memory you add, the LOWER the achievable speed.

    So for modern CPUs with multiple cores, it is the MEMORY which is holding it back.

    NLST HyperCloud allows you to pack all the sockets and STILL run at 1333 MHz.

    From the presentation above – you get “100% more capacity at 66% higher memory speed” (probably with all sockets full: competitors run at 800 MHz, so 1333/800 = 1.66, i.e. 66% higher memory speed).

    HyperCloud also uses register and isolation devices to shut down power to certain DRAM devices when not in use, thus reducing power. For heavily memory-loaded servers, memory power consumption is significant.

    HyperCloud “tricks” the system into treating two 1 Gbit memory chips as one 2 Gbit memory chip – thus using “lower dollar per bit” memory chips instead of more expensive ones. PLUS this advantage always stays with NLST – as new higher-density memory chips arrive, NLST can use second-tier memory chips (which will always be cheaper) to achieve the same performance.

    HyperCloud is plug and play, requires no BIOS changes, and requires no changes to the motherboard (which WOULD have required dealing with JEDEC standardization, getting partners on board, and similar complications). HyperCloud needs none of that – and can be used interoperably with regular memory.

    CEO Hong says they have done studies with OEMs – doubling memory capacity increases efficiency by 50%, i.e. fewer servers are required to do the same job in data centers/cloud computing.

    For heavily memory-loaded servers, memory cost is significant, so using cheaper memory is an advantage.
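    The speed and capacity figures above reduce to simple arithmetic. Here is a minimal sketch restating them (the exact slot-count thresholds between speed bins are assumed; the presentation only quotes the 9-slot and 18-slot figures):

```python
# Restating the memory-loading arithmetic quoted above for an 18-socket board.
# Quoted figures: 9 slots filled -> 1067 MHz, all 18 -> 800 MHz, lightly
# loaded -> 1333 MHz. The exact bin thresholds below are ASSUMED.

def standard_rdimm_speed(slots_filled):
    if slots_filled >= 13:   # assumed threshold for the bottom bin
        return 800
    if slots_filled >= 7:    # assumed threshold; 9 slots quoted at 1067 MHz
        return 1067
    return 1333

HYPERCLOUD_SPEED = 1333  # claimed to hold even at full population

# The "66% higher memory speed" figure, with all 18 slots filled:
gain_pct = int(HYPERCLOUD_SPEED / standard_rdimm_speed(18) * 100 - 100)
print(gain_pct)  # -> 66, i.e. 1333/800 = 1.66
```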

  165. Billy,

    “3 Advantages of Designing with Micron LRDIMMs”. That statement was made on a Micron Technology promotional web page soliciting OEMs to use their new DOA LRDIMM memory module. It seems to imply that LRDIMMs are not plug-and-play with current JEDEC standards, but would compete in a market with HyperCloud, which is plug-and-play. I say DOA because Micron is asking OEMs to design their systems around the memory modules and set the new standards that Micron would need. HyperCloud is plug and play. OEMs can spend their development budget on increasing bus speeds rather than accommodating new standards. I wonder what HyperCloud’s max speed really is, since it can handle today’s max bus speeds according to Netlist. Also, Micron is using buffer chips from Inphi that Netlist has contested as infringing on their IP and that are under litigation. The IP enabling speed and efficiency, along with 4-rank memory addressing, are all claimed by Netlist. The courts may have the final say, unless there is a settlement beforehand. Micron seemed to be well into development in late 2009, until the Netlist vs Inphi case was initiated. Where is Micron in the development of their LRDIMM modules? Is that the “Nothing” to which you were alluding?

  166. Current servers can enjoy significant upgrades in efficiency and power by simply switching to HyperCloud. Can LRDIMMs claim that?

  167. HyperCloud usually benefits you if you have heavily memory-loaded systems.

    On home-built systems you can overclock the computer and use higher-speed memory as well, I guess.

    However, the area HyperCloud is targeting runs at speeds starting at 1333 MHz but decaying rapidly as you fill up memory slots.

    As outlined in the March 9 post above, for 18-memory-slot motherboards at 1333 MHz, as you fill 9 slots you are forced to run at 1067 MHz. When you fill all 18 slots, you have to run at 800 MHz (which is where the “66% higher memory speed” figure that NLST quotes comes from).

    So for high memory-loaded servers, the advantage of having a fast CPU is negated as you add more memory.

    NLST allows you to run with all 18 slots at the full 1333 MHz.

    The power efficiency and cost advantages are additional.

  168. A routine settlement conference in GOOG vs. NLST is set for April 30, 2010 at 9:30 am before Magistrate Judge Elizabeth D. Laporte. Laporte was the judge both GOOG and NLST agreed on after Judge Trumbull was not available.

    quote:
    It is not unusual for conferences to last two to three hours or more. No participant in the settlement conference will be permitted to leave the settlement conference before it is concluded without the permission of the settlement conference judge.
    Parties are encouraged to participate and frankly discuss their case. Statements they make during the conference will not be admissible at trial to prove or disprove liability in the event the case does not settle. The parties should be prepared to discuss such items as their settlement objectives, any impediments to settlement that they perceive, whether they have enough information to discuss settlement and, if not, what additional information is needed and the possibility of a creative resolution of the dispute.

  169. I keep on feeling like some kind of surprise is going to jump out of this litigation. Not sure exactly what, but that’s the feeling I have.

  170. Did anyone here get a copy of the transcript from the Roth OC Growth conference yesterday? If so can you post it somewhere?
    I tried listening to the audio but the quality was terrible.

  171. It would be amazing if Netlist did not search for other partners after they were rebuffed by Google. They are a very ambitious company, given the development of their recent products. They could be pursuing multiple paths to growth. Their HyperCloud module has widened a data processing bottleneck. Its maximum speed may be greater than 1333 MHz. It’s possible that the same Netlist IP used to speed up a memory data bus could be used elsewhere in a digital circuit, thereby increasing data processing/routing speed. Mr. Chung Ki Hong was once president and CEO of Infinilink, which manufactured routers and other data equipment. I wonder if that was the basis of the rumor started earlier this week that Netlist collaborated with Cisco in the development of their new super router? Just food for thought. Facts are hard to come by until official news releases.

  172. It is possible that CSCO is using NLST IP for loading lots of memory into the routers they produce; however, that would require conventional motherboards being used (I don’t know if they are in that product), and a longer-standing relationship between CSCO and NLST than we are aware of.

    Secondly, CSCO’s UCS strategy is neutered by NLST HyperCloud – CSCO UCS servers have motherboards with the same type of circuitry, except it is in the motherboard, thereby allowing greater loading of memory. NLST has all of that on the memory module itself, making its modules plug and play and not dependent on any widespread adoption of a newer motherboard standard.

    So they are competitors in that aspect at least. Don’t know how that affects the rumored collaboration story.

    My own feeling is that the rumor is misleading – i.e. someone expecting something from NLST and it got ascribed to CSCO radical new product announcement.

    Or it could be that CSCO is using NLST’s NetVault product – since routers include battery-backed RAM to store configuration information, it could be that they have chosen NLST’s NetVault product instead of the standard battery-backed RAM. NLST’s NetVault backs up the information in the volatile RAM to flash memory (located on the same memory module) – the power required to do this quickly after a power failure is supplied by a “supercapacitor” instead of batteries (thus reducing periodic replacement of batteries by on-site personnel).

    NLST NetVault could be very useful in consumer-oriented products – while battery-backed approaches worked for data centers that have on-site personnel, consumer applications may open up once manufacturers know they CAN design products that don’t require battery maintenance to retain the integrity of the device’s data (e.g. the manufacturer’s startup settings).
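    The NetVault mechanism described above (DRAM copied to on-module flash on power failure, powered by a supercapacitor) can be sketched as a simple state flow. All names and logic here are illustrative; the actual controller is a hardware design and is not public:

```python
# Illustrative sketch of the backup-on-power-fail flow described above.
# Class name, fields, and logic are HYPOTHETICAL; the real controller
# is hardware, not public software.

class NetVaultLikeModule:
    def __init__(self):
        self.dram = {}             # volatile working data (RAID cache, config, ...)
        self.flash = {}            # non-volatile store on the same module
        self.supercap_charged = True

    def write(self, key, value):
        self.dram[key] = value

    def on_power_fail(self):
        # The supercapacitor powers the module just long enough to copy
        # DRAM into on-module flash -- no battery involved.
        if self.supercap_charged:
            self.flash = dict(self.dram)
        self.dram.clear()          # volatile contents vanish once power drains

    def on_power_restore(self):
        # Data is restored from flash, where it can persist for years.
        self.dram = dict(self.flash)

m = NetVaultLikeModule()
m.write("raid_cache_block", "dirty data")
m.on_power_fail()
m.on_power_restore()
print(m.dram["raid_cache_block"])  # the cached data survives the outage
```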

  173. Bill
    I got the transcript. Completely useless and unreadable. What I gathered was
    pretty much no new news – no OEM orders or any other news on litigation (at
    least in the transcript) !

  174. quote:
    I got the transcript. Completely useless and unreadable. What I gathered was
    pretty much no new news – no OEM orders or any other news on litigation (at
    least in the transcript) !

    That must have been one piece of creative writing (given the audio was so hard to decipher).

  175. Here is a bit of the transcript:
    And also a proceeding of major – operation to our department on the resulting on – the post office and important to maintain that – and we feel also – and the worldwide combined the – by holding certain of that and many to both number of – dollar material future to the proliferation of our computing and revolving traffic. And we are going to be – the workforce advice business makes anything. Do you think to create a huge computing and message do well?

    On Planet Earth,
    What worries me is that there is no news of successful evaluations or status at OEMs.
    Netlist – is this delay with no news reasonable? Or is there some problem with the HyperCloud solution?

  176. In conference audio, CEO Hong has mentioned that their customers have historically been:
    IBM
    DELL
    HP

    Their SEC filings have mention of these OEM relationships.

    In addition, NLST used to supply memory to AAPL – so it was considered a good-quality producer of memory.

    The Register has come out with a newer piece on NLST – this time calling the raising of $15M as indicative of Wall Street confidence in the company’s technology:

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=14629&mid=14629&tof=1&frt=2
    Netlist’s HyperCloud memory gets Wall Street’s blessing
    Raises $14.1m in stock sale
    By Timothy Prickett Morgan
    Posted in Financial News, 23rd March 2010 06:02 GMT

  177. Sorry that was the yahoo board thread discussing that article.

    Here’s the direct link:

    http://www.theregister.co.uk/2010/03/23/netlist_public_float/
    Netlist’s HyperCloud memory gets Wall Street’s blessing
    Raises $14.1m in stock sale
    By Timothy Prickett Morgan
    Posted in Financial News, 23rd March 2010 06:02 GMT

    You will recall the earlier piece by The Register, which is probably the best explanation so far of NLST HyperCloud:

    http://www.theregister.co.uk/2009/11/11/netlist_hypercloud_memory/
    Netlist goes virtual and dense with server memory
    So much for that Cisco UCS memory advantage
    By Timothy Prickett Morgan
    Posted in Servers, 11th November 2009 18:01 GMT

  178. quote:
    In addition NLST used to supply memory to AAPL – so was considered good quality producer of memory.

    This was before NLST did the restructuring (i.e. deemphasizing conventional memory, which was a loss-making operation for most memory producers at that time) and went on that prolonged effort to create what we now know as HyperCloud and NetVault – and the move to Suzhou, China (which seems to be a memory-producing hub).

    NLST states that nearly 50% or so of their customers are in China – whether that is China-China or HP or other OEMs’ factories in Suzhou is unclear (since many U.S. companies operate in China). The reason given for the shift to Suzhou was also that “we needed to be closer to our customers” or something like that.

  179. quote:
    Reason for shift to Suzhou given was also that “we needed to be closer to our customers” or something like that.

    And reduction in production cost.

    Probably easier to source memory chips also – since most of the memory producers are there as well – Hynix and others.

    NLST HyperCloud will be using Hynix memory chips (in the first batch at least).

  180. Updates on GOOG vs. NLST.

    From the following thread on NLST yahoo board:
    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=14733&mid=14811&tof=1&frt=2#14811
    Re: update on the various court cases 2 .. GOOG infringing memory

    We now know what types of infringing memory GOOG was using and who their contract manufacturers were (from court filings in GOOG vs. NLST – dockets 113 and 114).

    NLST has outlined the info collected from the depositions they have been taking from GOOG employees.

    From docket 113 (113-1):

    12. Through the 30(b)(6) testimony of Google, Netlist was able to learn several
    key pieces of information that inform and serve as the basis for the proposed infringement
    contention amendments. I took both depositions. Because Google designated the transcripts
    as “Confidential- Attorney’s Eyes Only,” they are not being submitted herewith to avoid the
    necessity to file them under seal. The corporate testimony that Netlist was finally able to
    obtain from Google during February and March 2010 included the following:
    • the identity of the different 4-Rank FBDIMMs supplied by Google and
    their part numbers;
    • the specific serial signal protocol used by Google’s “logic element”
    component of the accused 4-Rank FBDIMMs (called an “Advanced Memory Buffer” or
    “AMB”) and the manner in which the logic element is informed about the rank to which
    command and address signals are to be directed;
    • the maximum number of memory ranks to which control and command
    signals received by Google’s 4-Rank FBDIMMs may correspond;
    • Google’s use of infringing eight gigabyte (“8GB”) 4-Rank FBDIMMs
    and 2GB 4-Rank FBDIMMs in addition to 4GB 4-Rank FBDIMMs;
    • the manner in which Google’s AMBs generate output command signals
    such as row address strobe (“RAS”) signals, column address strobe (“CAS”) signals, and
    write enable (“WE”) signals to a selected rank of memory to execute DRAM commands such
    as read, write, refresh, precharge, etc.;

    • the specific AMB part numbers and suppliers used by Google;
    • the identity of the contract manufacturers who have assembled 4-Rank
    FBDIMMs for Google;
    • Google’s receipt of a letter from Netlist to JEDEC in January 2008
    which identified the ‘386 Patent and its relationship to the JEDEC AMB Quad Rank Support
    Standard that Google admits to practicing and Google’s actions in response to the Netlist
    letter;
    • Google’s admission that the AMB is a form of an application specific
    integrated circuit (ASIC);
    • Google’s use of edge connectors on its 4-Rank FBDIMMs to connect
    the modules to memory slots in its servers.
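    To make the bullet points above more concrete: in a quad-rank module, the buffer (AMB) must work out which of the four ranks a command targets and assert the matching chip-select line before driving RAS/CAS/WE to that rank. The sketch below is a minimal, hypothetical illustration of that general rank-selection idea – the names and 2-bit encoding are my assumptions, and this is not the ‘386 Patent’s claimed method or the actual AMB logic.

```python
# Hypothetical sketch of rank selection in a 4-rank memory module.
# NOT the '386 Patent's claimed method or real AMB internals - purely
# illustrative of the general concept the depositions touch on.

def decode_rank(rank_bits: int) -> list[int]:
    """Map a 2-bit rank id to four chip-select lines (1 = asserted)."""
    if not 0 <= rank_bits <= 3:
        raise ValueError("a 4-rank module addresses ranks 0-3 only")
    return [1 if cs == rank_bits else 0 for cs in range(4)]

def route_command(command: str, rank_bits: int) -> dict:
    """Drive a DRAM command (e.g. RAS, CAS, WE) to the selected rank only."""
    return {"command": command, "chip_selects": decode_rank(rank_bits)}
```

    For example, `route_command("CAS", 2)` would assert only chip-select line 2, leaving the other three ranks idle – which is the behaviour the “logic element” testimony above is probing.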

    Regarding the pg. 4 (see below) mention of “Ilium” and “Icarus” servers, I don’t know if these are classes of servers, or the specific servers that GOOG was asked to turn over to NLST for examination (which led to the discovery of “Mode C” usage – the smoking gun for “4-rank/quad-rank” use).

    Searching Google for those server names turned up this:

    http://ruscoe.net/google/google-subdomains-internal/
    Google’s Internal Subdomains

    icarus.corp.google.com (7 May 2007)
    ilium.corp.google.com (7 May 2007)

    Regarding pg. 7 (see below) we see some of the contract manufacturers:
    Unigen
    Southland Microsystems
    Kingston
    Qimonda
    Entorian

    The timeline, while known before, is spelled out again below – it shows that GOOG was caught like a deer in the headlights when NLST raised the issue, and rather than deal with it, went to court instead to create an orderly environment for settling the issue (which underlies the server technology GOOG is using).

    From docket 114 (114-4) for case GOOG vs. NLST:

    (pg. 4)

    … Based on the information presently known to Netlist, the Accused Instrumentalities include memory modules bearing the following names and/or model numbers:
    1) 4-Rank, 2GB FBDIMMs: iGooFMM2, 07000752, 07000753, 07000754, 07000755, 07001780, 07001853, 07001834, 07002739, K17000752-753, K107000752-754 K107000752-755, S107000752-780 G107000752-853, G107000752-854, S107000752-739 S107000752-854:
    2) 4-Rank, 4GB FBDIMMs: iGooFMM44, 07000763, 07000764, 07000765, 07000766, 07001779, 07001852, 07002028, 07002028, 07002255, K107000763-764, K107000763-765, K107000763-766, S107000763-779, Q107000763-852, G107000763-028, Ql07000763-255, S107000763-028, iGooFMM4LP, 07005903, S107000763-xxxLP, and
    3) 4-Rank, 8GB FBDIMMs: GooFMM8, GooFMM8Q, 07002964, 07002970, QZ07002964-970.

    Google infringes the ‘386 Patent as follows:
    1. By making, using and/or importing the Accused Instrumentalities, including by operating computer servers known as “Ilium” and “Icarus” in which the Accused Instrumentalities have been installed.

    (pg. 5)

    4. By supplying components of the Accused Instrumentalities–including DRAM chips, quad-rank supported advanced memory buffers, and printed circuit boards–to third party contract manufacturers who make 4-Rank FBDIMMs for Google’s use, at Google’s request, and as instructed by Google, Google is actively inducing infringement ..

    (pg. 7)

    … The contract manufacturers include Unigen, Southland Microsystems, Kingston, Qimonda and Entorian. Google has instructed the contract manufacturers in the assembly of the Accused Instrumentalities with knowledge of the ‘386 Patent and intent of causing the contract manufacturers to directly infringe the ‘386 Patent.

    (pg. 8)

    (h) Basis for Assertion of Willful infringement (PLR 3-1(h))
    The basis for Netlist’s assertion of willful infringement includes the following:

    During 2007, Google had 4-Rank FBDIMMs assembled for it by its contract manufacturers and used the modules in its data center computer servers.

    Google was aware that the FBDIMMs were quad-rank enabled and understood that the modules utilized a quad-rank enabling method set forth in an Intel proposal called “AMB Quad Rank Support.”

    During late 2007, Google attended one or more JEDEC meetings at which Netlist announced that it may have intellectual property that might cover the Intel AMB Quad Rank Support proposal. Nevertheless, Google made no inquiry about the Netlist IP and abstained from voting on the Intel proposal to conceal its use of quad-rank enabled AMBs from other JEDEC members.

    In January 2008, Google received a letter from Netlist to JEDEC which identified the ‘386 Patent by number and which stated that the ‘386 Patent might be required to implement “Mode C” of the AMB Quad Rank Support standard. Google has admitted to using Mode C in its 4-Rank FBDIMMs.

    In May 2008, Netlist wrote to Eric Schmidt, Chairman of the Board and Chief Executive Officer of Google, and informed Mr. Schmidt that Netlist had reason to believe that Google was using the technology claimed in the ‘386 Patent in its computer servers.

    Along with the May 2008 letter, Netlist provided Mr. Schmidt with a copy of the ‘386 Patent. Google never responded to the May 2008 letter.

    On June 4, 2008, Netlist’s counsel wrote to Mr. Schmidt again, stating that Google had not responded to the May 2008 letter and again identifying the ‘386 Patent to Mr. Schmidt. A copy of the May 2008 letter and its attachments accompanied the June 4, 2008 letter.

    Google did not respond to the June 4, 2008 letter. Thus on June 19, 2008, counsel for Netlist again wrote Mr. Schmidt and again attached the May 2008 letter and a copy of the ‘386 Patent.

    In July 2008, Netlist met with Google and made a presentation to Google’s counsel describing the ‘386 Patent and its relationship to the JEDEC AMB Quad Rank Support Standard. On August 28, 2008, Google filed this lawsuit.

    Despite being put on repeated notice concerning the ‘386 Patent and the infringement of its 4-Rank FBDIMMs, Google continued to have infringing modules made for it and continues to use them even now. There is no evidence that Google has obtained an opinion of counsel of non-infringement or invalidity or that it has taken any step to avoid infringing the ‘386 Patent.

  181. Netlist:

    Great job as usual.

    Given the delayed response by GOOG and the type of response they chose, I surmise that GOOG was using so many suppliers, and its use of NLST IP was so extensive, that they not only needed time but that a stoppage of the use of the NLST IP would have been detrimental to GOOG’s everyday operations.

    That said, since GOOG would be one of the biggest customers of this NLST IP, if not the biggest, a cash settlement and additional considerations could be realistic.

    Do you see any chance of a settlement announcement prior to the hearing on the 30th?

  182. quote:
    Do you see any chance of a settlement announcement prior to the hearing on the 30th?

    I suppose that is possible, but no way to know.

    If I were GOOG I would settle early and get moving with NLST HyperCloud.

    GOOG lawyers may have other ideas though.

  183. Thank you for the updates, netlist.

    It’s hard to tell where this might eventually end up going, even with the facts that have come out so far.

  184. Not directly related to the NLST/GOOG legal thread here, but continuing earlier post above about GOOG data center construction:
    http://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-237024

    here is a video presentation of GOOG data center:

    http://www.greentechies.com/blog/2009/04/08/video-googles-energy-efficient-data-center/
    Video: Google’s Energy Efficient Data Center

    quote:
    CNET’s Stephen Shankland collected a bunch of Google produced videos on YouTube that discuss Google’s energy efficient container data centers…

    YouTube tour reveals Google data center designs

  185. netlist,

    The bullet points that you posted seem to strongly support Netlist’s argument and give Google little hope of winning a judgment. Are you aware of any testimony or evidence presented by Google that would give them a credible chance of obtaining a judgment in their favor?

  186. netlist

    The bullet points I referred to in the previous message are from docket 114 (114-1)

  187. Hmm .. the previous post is confusing, reposting it below with some separators

  188. quote:
    The bullet points that you posted seem to strongly support Netlist’s argument and give Google little hope of winning a judgment. Are you aware of any testimony or evidence presented by Google that would give them a credible chance of obtaining a judgment in their favor?

    Docket 113 is where NLST amends its charges against GOOG (thanks to facts unearthed in discovery – part numbers, the contractors GOOG was using, etc.).

    From docket 113:
    —–
    (pg. 8 )
    Specifically, a significant part of the non-public material discovered by Netlist
    was obtained from the Rule 30(b)(6) deposition of Google’s staff design engineer, Rob
    Sprinkle, whom Google promised to produce at the beginning of January yet ultimately
    refused to produce until February 18, 2010. Hansen Dec. at ¶ 11. As described above,
    Google also was “unable” to provide key information in response to Netlist’s written
    discovery concerning the structure and operation of the accused 4-Rank FBDIMMs, and
    refused to provide any specifics in its non-infringement contentions.
    —–

    Circumstantially, GOOG’s behaviour has been one of evasion. For example, they did not answer NLST’s challenge with even a counter-claim, but instead went rapidly to court (GOOG vs. NLST). It is possible that GOOG thought the case was indefensible (could not give any answer) and so took it to the court forum immediately.

    This excerpt gives some sense of recent happenings, and you can get a flavor for the types of defence GOOG is mounting from this:

    From docket 113-1:
    —–
    (pg. 2 )
    After assuming responsibility for this case, Netlist’s current counsel began
    reviewing hundreds of thousands of pages of information produced by Google and serving
    written discovery directed to the structure and operation of Google’s accused 4-Rank
    FBDIMMs. On September 10, 2009, Netlist served its second set of interrogatories and first set of requests for admission on Google. The requests for admission were directed to several pieces of infringement information, including the nature of the specific electrical signals received by the logic element (called an “advanced memory buffer” or “AMB”) of Google’s accused 4-Rank FBDIMMs.

    Attached as Exhibit B hereto is a true and correct copy of Plaintiff Google Inc.’s Responses to Netlist’s Request for Admissions Set No. One [Nos. 1-26],” dated October 27, 2009 (“Google Response to Netlist’s RFAs”). As set forth therein, Google stated that it could not respond to Requests Nos. 6, 11, 13, 14, 16, 17, 19, 21, 23, 24, 25, and 26 concerning its own 4-Rank FBDIMMs because it “lack[ed] sufficient information.” Id. at 8, 11-15 (Request Nos. 6, 11, 13, 14-17, 19, 21, 23-26)

    6. Attached as Exhibit C hereto is a true and correct copy of “Plaintiff Google
    Inc.’s Responses to Netlist’s Interrogatories, Set No. Two [Nos. 6-9],” dated October 27, 2009. As stated at page 5 therein, Google identified Robert Sprinkle as an employee who is knowledgeable about the structure and operation of the accused 4-Rank FBDIMMs and gave the following answer as to why it contends that its 4-Rank FBDIMMs are allegedly non-infringing:

    Google does not infringe any claim of the ‘386 patent because one or more
    elements required to be present by each claim is missing from Google’s accused
    products, both literally and under the doctrine of equivalents. For example, the
    accused products do not include a structure that meets the “logic element”
    limitation because they nowhere include the functionality that is claimed in that
    limitation.

    7. On November 8, 2009, Netlist accepted Google’s offer to produce Mr.
    Sprinkle for deposition on January 8, 2010. Google also insisted that Netlist simultaneously
    depose Mr. Sprinkle in his individual capacity and under Rule 30(b)(6) concerning the
    “structure, function and operation of the accused Google instrumentalities” on the agreed-upon date. Attached as Exhibit D is a true and correct copy of an e-mail from me to Shelly K. Mack of Fish & Richardson as well as Ms. Mack’s November 15, 2009 response thereto concerning the foregoing.

    8. On December 4, 2009, Netlist filed a patent infringement lawsuit (Case No.
    09-05718-SBA) against Google alleging infringement of U.S. Patent No. 7,619,912 (the
    “’912 Patent”), which had just been issued by the United States Patent & Trademark Office on November 17, 2009. The ‘912 Patent is a continuation of the ‘386 Patent that is at issue in this case. In the present action, Netlist filed an Administrative Motion to Consider Whether Cases Should Be Related and Consolidated under Civil Local Rule 3-12 on December 17, 2009 (Document 83). Along with the motion, Netlist filed a joint stipulation signed by both parties requesting consolidation of this case and Case No. 09-05718-SBA. After the Court related but did not consolidate the cases, the parties filed a Joint Motion to Consolidate Cases, on January 6, 2010. Document 85. The Court denied the consolidation motion on February 3, 2010. Document 95.

    9. On December 29, 2009, Netlist contacted Google and informed Google that in
    the Court’s order relating this case and Case No. 09-05718-SBA the Court had not ordered
    consolidation of the cases. Netlist also requested that Google identify the topics to which Google’s 30(b)(6) witnesses would testify, and in particular, those to which Mr. Sprinkle would testify on January 8, 2010. In response, Google informed Netlist that it would not produce any witnesses for deposition until the Court ruled on the parties’ request for consolidation. Attached as Exhibit E hereto is a true and correct copy of an e-mail from Google’s counsel, Shelly Mack, to me dated January 4, 2010 stating the foregoing.

    10. In view of Google’s actions, Netlist re-served its Rule 30(b)(6) deposition
    notice and Mr. Sprinkle’s deposition notice on January 6, 2010. The notices set the date for Mr. Sprinkle’s deposition on February 4, 2010 and for the corporation under Rule 30(b)(6) on February 5, 2010. Attached as Exhibit F hereto is a true and correct copy of a letter from Lauren Gibbs of Pruetz Law Group to Shelly Mack of Fish & Richardson, dated January 6, 2010, along with the deposition notices for Robert Sprinkle and for the corporation under Rule 30(b)(6) which accompanied Ms. Gibbs’ letter.

    11. In late January, the King & Spalding firm replaced Fish & Richardson as
    Google’s counsel of record in this lawsuit. Mr. Sprinkle was not produced for deposition on
    the previously noticed date of February 4, 2010. Instead, he was produced on February 18, 2010. Mr. Sprinkle was designated to testify to topics 1(a-m), 2, 6, 8, 9, 10, 11, 13, 16, 17, and 18 of Netlist’s Rule 30(b)(6) Notice, a copy of which is attached as Exhibit G hereto. In addition, Google produced Andrew Dorsey to testify to topics 3, 4, and 12 and to a portion of topic 5 on March 11, 2010. Attached hereto as Exhibit H is an e-mail from Scott Weingaertner of King & Spalding, dated February 14, 2010 specifying Google’s designations of Rule 30(b)(6) witnesses. Although Norm Haus is identified on Exhibit H, Google ultimately produced Andrew Dorsey in his place on March 18, 2010.

    12. Through the 30(b)(6) testimony of Google, Netlist was able to learn several
    key pieces of information that inform and serve as the basis for the proposed infringement
    contention amendments. I took both depositions. Because Google designated the transcripts
    as “Confidential- Attorney’s Eyes Only,” they are not being submitted herewith to avoid the
    necessity to file them under seal. The corporate testimony that Netlist was finally able to
    obtain from Google during February and March 2010 included the following:
    • the identity of the different 4-Rank FBDIMMs supplied by Google and
    their part numbers;
    • the specific serial signal protocol used by Google’s “logic element”
    component of the accused 4-Rank FBDIMMs (called an “Advanced Memory Buffer” or
    “AMB”) and the manner in which the logic element is informed about the rank to which
    command and address signals are to be directed;
    • the maximum number of memory ranks to which control and command
    signals received by Google’s 4-Rank FBDIMMs may correspond;
    • Google’s use of infringing eight gigabyte (“8GB”) 4-Rank FBDIMMs
    and 2GB 4-Rank FBDIMMs in addition to 4GB 4-Rank FBDIMMs;
    • the manner in which Google’s AMBs generate output command signals
    such as row address strobe (“RAS”) signals, column address strobe (“CAS”) signals, and
    write enable (“WE”) signals to a selected rank of memory to execute DRAM commands such
    as read, write, refresh, precharge, etc.;

    • the specific AMB part numbers and suppliers used by Google;
    • the identity of the contract manufacturers who have assembled 4-Rank
    FBDIMMs for Google;
    • Google’s receipt of a letter from Netlist to JEDEC in January 2008
    which identified the ‘386 Patent and its relationship to the JEDEC AMB Quad Rank Support
    Standard that Google admits to practicing and Google’s actions in response to the Netlist
    letter;
    • Google’s admission that the AMB is a form of an application specific
    integrated circuit (ASIC);
    • Google’s use of edge connectors on its 4-Rank FBDIMMs to connect
    the modules to memory slots in its servers.
    —–

    You get the general impression that GOOG is not too eager to present their own employees’ side of the case (generally indicative of a fear that early discovery will only harm GOOG’s case) – this is especially odd given that this case was brought by GOOG to protect itself against an injunction (had NLST gone to court to stop GOOG’s servers):

    From docket 113:
    —–
    (pg. 5 )

    Netlist also sought to obtain deposition testimony from Google concerning the
    structure and operation of its accused 4-Rank FBDIMMs. Google identified Robert Sprinkle as “a person employed by Google who is knowledgeable about the structure and operation of the accused products.” Id. at 5. Google offered Mr. Sprinkle for deposition on January 8, 2010, and Netlist accepted. Hansen Dec. at ¶ 7, Exh. D. Google also insisted that Netlist simultaneously depose Mr. Sprinkle in his capacity as a fact witness and a Rule 30(b)(6) witness. Id.

    In early December, Netlist filed a related lawsuit against Google (Case No. 09-05718-
    SBA) alleging infringement of U.S. Patent No. 7,619,912, which had just issued on
    November 17, 2009. Hansen Dec. at ¶ 8. Following the filing of the lawsuit, the parties jointly requested consolidation of this lawsuit and the ‘912 Patent lawsuit, which the Court ultimately denied on February 3, 2010. Hansen Dec. at ¶ 8. During the pendency of the request for consolidation, Google refused to produce any witnesses for deposition, including Rob Sprinkle:

    Because the Court has not yet ruled as to whether two suits between the parties
    will be consolidated, Google will not be presenting Mr. Sprinkle for deposition
    on January 8th, and will not be presenting any witnesses for deposition (whether
    in individual or 30(b)(6) capacities) until the scope of the case is clarified and, if
    the cases are consolidated, a new schedule is entered. Fact discovery remains
    open until the end of March, and there is no looming deadline that makes it
    important for Netlist to take Mr. Sprinkle’s deposition in early January.

    Hansen Dec. at ¶ 9, Exh. E (emphasis added). On January 6, 2010, Netlist again noticed Mr. Sprinkle’s fact and Rule 30(b)(6) depositions, this time for February 4-5, 2010. Hansen Dec. at ¶ 10, Exh. F.

    In late January, Google replaced its counsel in this lawsuit, and Mr. Sprinkle was not
    produced for deposition on February 4 or 5. Hansen Dec. at ¶ 11. Instead, he was ultimately produced on February 18, 2010. Hansen Dec. at ¶ 11. On March 11, 2010, an additional Rule 30(b)(6) witness, Andrew Dorsey, provided testimony about the manner in which Google induces the infringement of the ‘386 Patent by supplying contract manufacturers with components and directions for assembling 4-Rank FBDIMMs. Hansen Dec. at ¶ 11, Exh. H. Google designated both Mr. Sprinkle and Mr. Dorsey’s depositions as “Confidential – Attorney’s Eyes Only” under the Court’s Protective Order, contending that the information provided by the witnesses was not publicly available. Hansen Dec. at ¶ 13.

    —–

    You can judge how well-informed GOOG’s defence is about GOOG’s own actions from this statement (quoted above):

    —–
    quote:
    As set forth therein, Google stated that it could not respond to Requests Nos. 6, 11, 13, 14, 16, 17, 19, 21, 23, 24, 25, and 26 concerning its own 4-Rank FBDIMMs because it “lack[ed] sufficient information.” Id. at 8, 11-15 (Request Nos. 6, 11, 13, 14-17, 19, 21, 23-26)
    —–

    And this statement:

    —–
    quote:
    As stated at page 5 therein, Google identified Robert Sprinkle as an employee who is knowledgeable about the structure and operation of the accused 4-Rank FBDIMMs and gave the following answer as to why it contends that its 4-Rank FBDIMMs are allegedly non-infringing:

    Google does not infringe any claim of the ‘386 patent because one or more
    elements required to be present by each claim is missing from Google’s accused
    products, both literally and under the doctrine of equivalents. For example, the
    accused products do not include a structure that meets the “logic element”
    limitation because they nowhere include the functionality that is claimed in that
    limitation.
    —–

    From the above-mentioned Exhibit C (GOOG’s responses to NLST’s questions or “requests”) you can see the aspects GOOG is avoiding answering at this time – a few of the responses that NLST claims (see above) GOOG was unable to answer are given below. In most of them, GOOG says it “lacks sufficient information” to answer questions about its own product:

    From docket 113-1:
    —–
    (pg. 26 )
    Subject to, without waiving, and based upon the foregoing objections, Google responds as follows: Google admits that certain FBDIMMs used in certain of its servers follow the Mode C serial channel communication protocol set forth in the JEDEC standard for the respective DRAM used on the DIMM. To the extent not admitted, Google lacks sufficient information to admit or deny this Request. Google reserves the right to supplement or amend its response at an appropriate time.

    —–
    (pg. 29 )

    REQUEST FOR ADMISSION NO. 11:

    In certain of Google’s servers, at least one Google AMB receives bits (“Google’s AMB Input Bits”) from the server’s memory controller.

    RESPONSE TO REQUEST FOR ADMISSION NO. 11:

    Google incorporates by reference each of the General Objections. Google further objects to this Request as vague and ambiguous as to at least the terms “Google AMB,” “receives” and “memory controller.” Google further specifically objects to this Request on the basis of General Objection No. 2, above, concerning the “bit” terms.

    Subject to, without waiving, and based upon the foregoing objections, Google responds as follows: Google lacks sufficient knowledge or information to admit or deny this Request at this time. Google reserves the right to supplement its response at an appropriate time.

    —–
    (pg. 10 )
    REQUEST FOR ADMISSION NO. 13:

    In certain of Google’s servers, at least one Google AMB receives DRAM Address Bits from the server’s memory controller.

    RESPONSE TO REQUEST FOR ADMISSION NO. 13:

    Google incorporates by reference each of the General Objections. Google further objects to this Request as vague and ambiguous as to at least the terms “Google AMB,” “Address Bits” and “memory controller.” Google further specifically objects to this Request on the basis of General Objection No. 2, above, concerning the “bit” terms.

    Subject to, without waiving, and based upon the foregoing objections, Google responds as follows: Google lacks sufficient knowledge and information to admit or deny this Request at this time. Google reserves the right to supplement its response at an appropriate time.

    —–

  189. In Exhibit C we have GOOG responding to the second set of questions by NLST. Here NLST asks why GOOG did not admit certain things in the first set of questions:

    From docket 113-1:
    —-
    (pg. 46 ) – Exhibit C

    INTERROGATORY NO. 9:

    For each request for admission that Google did not admit in Netlist’s First Set of Requests for Admission of Plaintiff Google, Inc., served September 10, 2009, please explain why Google did not admit the request, and identify all documents that support the basis for Google’s response to the request and persons with knowledge of the basis for Google’s response to the request.

    RESPONSE TO INTERROGATORY NO. 9:

    Google incorporates each of the foregoing General Objections as if set forth fully in response to this interrogatory. Google further objects to this interrogatory to the extent it calls for information protected by the attorney-client privilege, the work product doctrine or any other applicable exemption from discovery. Google further objects to this interrogatory as over broad, unduly burdensome, and duplicative to the extent it requests Google to re-state information that it has previously provided, or is concurrently providing, elsewhere.

    Subject to and without waiving the foregoing objections, Google responds as follows: Google’s responses and objections to Netlist’s First Set of Requests for Admission are fully compliant with the requirements of Federal Rule 36, and as such, those responses and objections adequately disclose the reasons for Google’s denials and partial denials. Google incorporates those responses and objections here by reference.

    —-

  190. Recently NLST has expanded the scope of its charges against GOOG. GOOG has not accepted this expansion (because only one week remains before the close of fact discovery – from an e-mail dated Mar 25, 2010 – pg. 99 of docket 113-1).

    So if GOOG needs time, we may see the court delay the end of discovery (maybe). Or the court may deny NLST’s expansion of charges because it expands the complexity of the case.

    It is interesting how GOOG delayed Sprinkle’s testimony (awaiting a consolidation which didn’t happen) and is now saying there is not enough time for discovery.

    It seems NLST has been deposing (taking testimony from) GOOG’s AMB (buffer chip) suppliers – pg. 103 of docket 113-1 (Exhibit N) refers to IDT and NEC:
    —-
    quote:
    Netlist also just deposed Google’s AMB suppliers, IDT and NEC, this week, and will use
    the information from those depositions to support its ‘912 Patent Infringement Contentions. Thus, any alleged 
    “delay” will ultimately inure to Google’s benefit in the form of more detailed and specific infringement 
    contentions. 
    —-

    In addition GOOG was set to depose JEDEC on March 30.

    Regarding JEDEC, we see that JEDEC did circulate a letter to members about NLST’s contention that the JEDEC “Mode C” standard was infringing NLST IP:

    From docket 113-1
    —-
    pg. 44 (Exhibit C – pg. 6)

    INTERROGATORY NO. 7:

    State the date on which Google first became aware of the ‘386 Patent, the patent application that issued as the ‘386 Patent, any patent application to which the ‘386 Patent claims priority, and/or any Netlist patent application disclosing and/or claiming memory density multiplication, memory rank decoding, and/or memory rank multiplication; describe the circumstances leading to such first awareness, including the identity of the person(s) involved, the identity of all documents which refer or relate to such first awareness, and/or the circumstances leading to such first awareness.

    RESPONSE TO INTERROGATORY NO. 7:

    Google incorporates each of the foregoing General Objections as if set forth fully in response to this interrogatory. Google further objects to this Interrogatory to the extent it calls for information protected by the attorney-client privilege, the work product doctrine, or any other applicable exemption from discovery. Google objects to this Interrogatory as calling for the production of information that is neither relevant nor likely to lead to the discovery of admissible evidence to the extent it requests information concerning patents other than the ‘386 patent in suit. Google will respond concerning the patent in suit only. Google further objects to this Interrogatory as vague and ambiguous as to at least the terms “memory density multiplication,” “memory rank decoding,” and “memory rank multiplication.” Google further objects to this Interrogatory as over broad and unduly burdensome to the extent it would require an investigation into the aforementioned irrelevant patents concerning vague and ambiguous subject matter, which have no bearing on this case.

    Subject to and without waiving the foregoing objections, Google responds as follows: Google was first made aware of the ‘386 patent in suit by an e-mail from Mr. Phileasher Tanner of JEDEC to various JEDEC mailing list recipients, including Mr. Rob Sprinkle and Mr. Andrew Swing of Google on or about Jan. 10, 2008, forwarding a Netlist patent disclosure letter concerning the patent. This e-mail, and the attached letter, were produced by Google in this matter as GNET034096-97 and GNET269919-20.
    —-

  191. Hi netlist,

    It’s interesting to see how the pre-courtroom drama is playing out amongst Google and Netlist. Thank you very much for the updates.

  192. NLST yesterday got qualified by SuperMicro (SMCI).
    So this is now the first third-party validation of NLST technology.

    For a good overview of what this means, check out these threads:

    http://messages.finance.yahoo.com/Business_%26_Finance/Investments/Stocks_%28A_to_Z%29/Stocks_N/threadview?bn=51443&tid=15147&mid=15147
    Here’s why Supermicro deal is a BIG DEAL

    http://messages.finance.yahoo.com/Business_%26_Finance/Investments/Stocks_%28A_to_Z%29/Stocks_N/threadview?bn=51443&tid=15172&mid=15172
    Putting numbers on the Supermicro Deal

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=12961&mid=15178&tof=10&frt=2#15178
    Inphi seems to have had a delay

  193. quote:
    NLST yesterday got qualified by SuperMicro (SMCI).
    So this is now the first third-party validation of NLST technology.

    http://finance.yahoo.com/news/Supermicro-Qualifies-Netlists-prnews-2550444882.html?x=0&.v=1
    Supermicro Qualifies Netlist’s HyperCloud Memory on High-Density Servers
    Netlist’s HyperCloud and Supermicro Optimize Server Utilization
    Press Release Source: Netlist, Inc. On Monday April 12, 2010, 6:00 am EDT

    “With Netlist’s HyperCloud memory, our servers empower customers to improve their productivity and to support memory-intensive applications such as cloud computing and virtualization,” said Wally Liaw, vice president of sales at Supermicro. “HyperCloud helps us to uniquely position our high memory footprint servers with unprecedented levels of performance in these growth markets.”

    By optimizing server utilization, HyperCloud improves datacenter economics associated with memory intensive, high performance computing applications and workloads, including virtualization, cloud computing, online transaction processing, video services and storage. Servers in these datacenters are typically underutilized due to memory bandwidth and capacity bottlenecks. Improving performance while lowering operating and capital expenses in datacenters, increases utility out of new and existing servers.

  194. netlist
    Thanks for the information. I am just becoming familiar with the legal side of the technical field, so it is taking me time to wrap my mind around the information that you supplied. Correct me if I’ve got it wrong.
    I gather that Google is using 4-rank memory addressing in many of their servers, but may have implemented it using their own ASIC and command codes, thus attempting to give Google protection against infringement claims. In my opinion, Mode C is Mode C no matter how it is implemented. It’s almost certain that Google got the idea from Netlist, and Netlist holds the patents protecting many elements of Mode C. The court will decide if that patent is broad enough to protect against superficial changes.
    Netlist had a chance to completely optimize the codes claimed as its IP. Often there is only one optimized code for any CPU operation such as memory operations. Even if Google wins by using a different code, it may be second-rate (less efficient) code compared to Netlist’s IP. That efficiency is hard to make up through ASIC design, since the ASIC is subservient to the CPU. This could mean that Google loses memory access speed because of inferior code. Moving terabyte upon terabyte of data over time using less efficient code adds up to real money. Even if Netlist loses its claim because of a slightly different code, competitors may have to either use Netlist’s code as written and respect the IP, or put out an inferior product using inferior code.
    It looks like Google stands a chance of shooting itself in the foot twice, since it’s giving Netlist an opportunity to see exactly how it implemented its flavor of Mode C, and may be delaying or preventing the use of a more elegant technology from Netlist.

  195. GOOG was informed (via a letter from JEDEC to its member companies, of which GOOG was one) of the conflict pointed out by NLST in the JEDEC “Mode C” proposed standard.

    Yet GOOG continued to be blasé about it, continuing to contract NEC and IDT to make the AMBs (buffer chips) for use on the memory modules, explicitly having these things manufactured, and then going out and using them in its servers.

    That is a high degree of complicity.

    In recent filings (April 14, 2010), GOOG is claiming it answered “don’t know” to many questions about the buffer chips because they don’t know what they do – that is an odd assertion given that it is GOOG which is CONTRACTING with these subcontractors to deliver them these buffer chips and to manufacture the stuff.

    From docket 117:

    (pg 3. )
    Google has been forthcoming about its limited knowledge and understanding of the accused 4-Rank FBDIMM products.

    (pg. 5 )
    Google had (and continues to have) insufficient knowledge because it neither designed nor manufactured the components at issue—the AMBs, which are being accused as the “logic element” of the asserted claims.

    The hardware that GOOG had manufactured seems to have been following the JEDEC proposed standard, or something close to it. NLST has taken testimony from NEC and IDT (AMB buffer subcontractors for GOOG) regarding the design of the AMB and how it relates to the JEDEC proposed standard.

    As long as GOOG is using “Mode C”, it is a telling indicator that it was continuing to pursue a path (one that JEDEC itself is wary of and had warned its members about) of violation of NLST IP.

  196. quote:
    In recent filings (April 14, 2010), GOOG is claiming it answered “don’t know” to many questions about the buffer chips because they don’t know what they do – that is an odd assertion given that it is GOOG which is CONTRACTING with these subcontractors to deliver them these buffer chips and to manufacture the stuff.

    Also, while GOOG may be entitled to claim this WHILE claiming NLST IP is not related to GOOG, I would think it weakens GOOG’s argument attacking NLST IP if they are not clear about the technology that they ARE using.

    If they don’t know what they are using, how can they claim non-relatedness to NLST IP?

    Since GOOG vs. NLST is GOOG’s attempt to deflect an imminent injunction closing GOOG servers (if NLST requested it after complaining to GOOG some time back), there IS a burden on them to show evidence that distinguishes their technology from the IP claimant (NLST).

    One could argue the burden of proof is on NLST. However, we are talking about hush-hush internal manufacture of hardware by GOOG, and some degree of cooperation IS required from GOOG. What NLST can show is circumstantial evidence that on the face of it GOOG IS violating NLST IP; that has been shown in discovery, where a GOOG server turned out to be using “Mode C”, and GOOG has accepted that it DOES use “Mode C”.

    The question naturally arises: even if GOOG doesn’t know WHAT it’s doing, and is just following the JEDEC proposed standard and telling subcontractors to “just use that”, where was the caution when JEDEC informed members that the proposed standard was “iffy” because of potential violation of NLST IP?

    Instead, GOOG continued doing what it was doing, and it was not until NLST asked questions of GOOG directly that GOOG went straight to court (rather than answering NLST directly) in order to “manage” the process of (possibly) eventual concession to NLST (in an orderly court environment).

  197. In any case, this undermines the thesis that GOOG has great IP in this area (like MetaRAM – even though MetaRAM also conceded to NLST).

    Basically GOOG is relying on the JEDEC proposed standard or something similar. After all, when it contracts with NEC and IDT to make buffer chips but leaves the design up to them, they must be following some standard or method, likely the JEDEC proposed standard (of which GOOG was a part).

    If so, it boils down to JEDEC proposed standard vs. NLST dictating GOOG fortune in this case.

  198. “Don’t know”

    Well, that certainly is an interesting defense. GOOG must be squirming pretty badly to come up with that one. It may have worked for Ronald Reagan in the Iran-Contra hearings (“I don’t recall”), but ignorance is not an excuse for breaking the law when it comes to patent law.

    It almost seems that while claiming stupidity they are also trying to blame the companies they contracted to manufacture for them. If I were the judge, this kind of crap in the courtroom would piss me off.

    Has this approach actually ever worked in a case like this that anyone is aware of?

    Why is GOOG delaying a settlement?

    They don’t really think “don’t know” carries merit?

    Seems like the kind of defense the guy that has all the money and knows he is wrong uses in an attempt to bleed out the smaller guy.

  199. quote:
    Has this approach actually ever worked in a case like this that anyone is aware of?
    Why is GOOG delaying a settlement?
    They don’t really think “don’t know” carries merit?
    Seems like the kind of defense the guy that has all the money and knows he is wrong uses in an attempt to bleed out the smaller guy.

    Recently NLST expanded the charges against GOOG – as outlined in this post above:
    http://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-263652

    Docket 117 is the GOOG argument for why new charges should not be admitted now.

    As part of that argument, GOOG attempts to explain why their “don’t know” answers were not really a delaying tactic; that explanation (i.e. the subcontractors did it) is part of that argument.

    I was highlighting it because it reflects badly on GOOG’s overall case: it pisses off their subcontractors, and it feigns ignorance of GOOG’s role in the JEDEC proposed standard. It also undermines GOOG’s standing as a cognizant party; in effect it gives the impression they were juvenile in temperament (which undermines their credibility in fighting an IP case against NLST).

    To be fair to GOOG, docket 117 is not really their “defence” (i.e. “don’t know”); it is actually the set of arguments they are making to prevent NLST from expanding the charges against GOOG.

  200. Netlist – thank you as usual!

    If GOOG and NLST are scheduled for a settlement conference on the 30th, why is GOOG still submitting filings to the court? Is this normal? Are they still trying to argue the case?

  201. Settlement conference is routine – doesn’t mean much.

    Meanwhile GOOG has to fight whatever it can. Notice that NLST is piling charges on GOOG, so GOOG has to have rebuttals, and that is what we are seeing. It doesn’t mean that settlement cannot happen; court business still has to be dealt with on a day-to-day basis.

  202. The only way Google would have a chance of getting around Mode C is to custom-build a non-JEDEC server with different addressing protocols. That would add extra cost to servers and memory modules. If Google is not using the inexpensive 2-gig memory chips that Netlist’s IP enables it to use, Google would come up short. There would be obstacles concerning addressing, speed, power, and thermal management. Google’s approach makes no sense unless the new design delivers a quantum leap in overall performance per dollar spent. There is no substitute for “on board memory” when it comes to servers. I think the high price of custom-built, non-JEDEC servers and memory modules would only make sense if Google is going into the server manufacturing business. Potential server purchasers would be leery of hitching their wagons to a competitor using non-standard components, because Google would control much of the IP. I wonder what Google’s end game is?

    I recall that Google prided itself on designing its own specialized hardware when it was smaller and more nimble. Focus can be easily lost with hyper-growth. An error only becomes a mistake if it’s not corrected. Netlist may prove that small and nimble is still best for focused goals.

  203. quote:
    I recall that Google prided itself on designing its own specialized hardware when it was smaller and more nimble. Focus can be easily lost with hyper-growth. An error only becomes a mistake if it’s not corrected. Netlist may prove that small and nimble is still best for focused goals.

    Google’s “custom hardware” was essentially use of generic hardware in ways that could be scaled. The search problem was designed to be scalable, and they tried to make the hardware side scalable – based on many cheap servers which could be scaled as needed.

    It is in GOOG’s interest if memory technology becomes standardized – memory costs go down.

    In the absence of a good solution, and with the JEDEC proposed standard, GOOG tried to build early what was coming down the road later. However, they failed to account for the legal ownership of the IP; their behavior was consistent with their beginnings as an academic type of organization, but it ignores their reality as a technology behemoth of a company.

    Within GOOG, their hardware division got involved in the memory project, so a bit of internal engineering momentum (and the fact that they are a consumer of their own hardware division’s output) may have blindsided their legal department (?).

    However, I am thinking that if there WAS internal momentum, it should have dissipated by now. The move from Fish & Richardson (#2 in IP litigation) to King & Spalding (#2 in ARBITRATION, but not in the top 30 in IP litigation) seems like a very deliberate move, and may have come from the very top of GOOG (after a meeting, perhaps: “ok, guys, what do we do?”).

    So probably GOOG NOW knows what they have to do; it’s just that the legal team does what is best to achieve a good balance if there ever are settlement discussions.

    NLST had already offered RAND terms to JEDEC. It seems the clear way forward is to have GOOG pay some penalty (or some future business; SMCI’s qualification of NLST probably goes a long way towards strengthening NLST’s position as having something tangible to offer, namely a working replacement for the stuff GOOG is doing currently).

    And to have JEDEC get good terms for its member memory module makers for the JEDEC proposed standard.

    However, it would seem that a memory module maker would be more interested in getting the NLST HyperCloud IP license than the JEDEC proposed standard (which may require BIOS changes).

    Or at least they would want NLST IP for the short term, with the JEDEC proposed standard for later (when BIOS-updated boards become the norm); maybe they would have to pay lower licensing fees that way.

    Total speculation of course.

  204. There may have been a misjudgement by the hardware division of Google. I have heard the term “patent troll” used by Google in the past. I don’t know if they used that term referring to Netlist directly. The fact that Netlist has a certified product using the contested IP blows that notion clean out of the water. Given that the company has a tendency to use such references, it gives insight into elements of Google’s internal culture.

  205. I speculate that the only way this gets to court is that Google’s counsel thinks it has a much better than 50% chance of winning. A loss could be a public relations disaster. Meanwhile, discovery will buy time.

  206. In my opinion, it would be amazing if Google had no knowledge of infringing technology because it left it up to the individual suppliers to determine how to design memory modules for Google’s use. That could result in a hodge-podge of different specs and performance characteristics. Any company making that kind of investment would want tight control over what it is getting for its money.

  207. Recently NLST expanded the charges against GOOG – as outlined in this post above:
    http://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-263652

    GOOG opposed, claiming that they “didn’t know” how their memory module components operate and that they were not delaying, etc.

    In response, NLST filed a pretty interesting read on events.

    —-
    Given the foregoing, Google seeks to confuse the Court by arguing that Netlist “has had detailed knowledge of AMBs for years” based upon its participation in the industry’s standard setting process and analyses of third-party products. Opposition p. 3:1-15. Google then contrasts Netlist’s expertise with its own professed ignorance as to the function of AMBs. Opposition p. 3:16-17 (“In contrast to Netlist’s detailed knowledge of AMBs, Google, as an acquirer of 4-rank FBDIMMs and the AMBs they include, has only limited knowledge of the products”).

    In doing so, Google argues that Netlist somehow lacked diligence in amending its infringement contentions because Netlist would be more knowledgeable than Google on the function of AMBs in the computer memory industry and could have sought the information from
    other third parties earlier.

    Google’s argument is a red-herring; Netlist’s knowledge of AMBs in the computer
    memory industry in general is irrelevant. Indeed, if the information Netlist sought was generally
    known in the industry, Google would presumably not have designated the deposition transcripts
    of its Rule 30(b)(6) witnesses as “Confidential-Attorney’s Eyes Only.” Hansen Dec. ¶ 12.
    Netlist needed and sought information on how Google used the third party AMBs it purchased
    and whether and how its use of the AMBs conformed to JEDEC standards.

    The proposed amended infringement contentions concern specific information regarding Google’s infringing use of AMBs with its 4-Rank FBDIMMs. This information includes details such as the number and type of “memory devices” (i.e., DRAM chips), the physical connection between the FBDIMMs and a computer server, the physical connections between the AMBs and the DRAM chips and the number of ranks in which the DRAMs are configured, the types of 4-Rank FBDIMMs used by Google (2GB, 4GB, 8GB), and the specific manner in which the AMBs operate when used by Google, including the types of commands they perform and how they perform them. Google delayed producing this information until the February and March 2010 depositions of its Rule 30(b)(6) witnesses. While some of the information may have been included in the hundreds of thousands of pages of documents that Google produced, the Rule 30(b)(6) witnesses were critical in navigating those documents and establishing their relationship to the structure and operation of the accused products.

    For example, when the current infringement contentions were created, Netlist only knew that Google was using 4 Gigabyte 4-rank FBDIMMs using rank multiplication technology, infringing upon claims 1 and 11 of the patent-in-suit. However, upon taking the deposition of Mr. Sprinkle on February 18, 2009, Netlist learned that Google’s AMB is a form of an application specific integrated circuit (ASIC), as recited in claim 5 of the ‘386 Patent. At Mr. Sprinkle’s deposition, Netlist also learned that claim 9 of the ‘386 patent was implicated by the specific manner in which Google’s AMB is informed about the rank to which command and address signals are to be directed. Hansen Dec., ¶ 12.
    —-

    And some information on how the GOOG server was made available for inspection – and the flip-flopping by GOOG on “Mode C” being used:

    From docket 133 (133-main):
    —-
    (pg. 5 )
    For example, Google sought to block Netlist from inspecting a Google server to obtain information and photographs of the server and its 4-Rank FBDIMMs. The information and exhibits were eventually used to examine Google’s witnesses.

    In an effort to convince the Magistrate Judge to deny Netlist’s inspection request, Google admitted that its 4-Rank FBDIMMs operated in accordance with “JEDEC Mode C,” an infringing mode of operation. Joint Letter of the Parties to Magistrate Judge Spero, dated May 19, 2009 at 5 (See Declaration of Steven R. Hansen in support of Netlist’s Reply Brief, dated April 20, 2010 (“Hansen Reply Dec.”) at ¶ 2, Exh. A).

    Magistrate Judge Spero eventually ordered the inspection to go forward. See May 29, 2009 Order Granting Defendant’s Request For Production No. 12 For Inspection Of A Functioning Google Server [Docket No. 31]). Further, in October 2009, Google admitted in responses to Requests for Admission that it uses “Mode C.” Google’s Response to Netlist’s RFAs at 6-7 (Hansen Dec., ¶ 5, Exh. B). However, on the evening of the last day of discovery, Google “supplemented” its responses to deny using Mode C:

    May 19, 2009 Letter to
    Magistrate Judge Spero
    (page 5) (Document 27;
    Hansen Reply Dec., ¶ 2, Exh.
    A) (original emphasis)

    “Google does not dispute that
    its FBDIMMs operate in
    Mode C . . . .”

    October 27, 2009 Response
    to Netlist’s Request for
    Admission No. 3 (Hansen
    Dec., ¶ 5 Exh. B)

    “Google admits that certain
    FBDIMMs used in certain of
    its servers follow the Mode C
    serial channel
    communication protocol set
    forth in the JEDEC standard
    for the respective DRAM
    used on the DIMM.”

    March 30, 2010
    Supplemental Response to
    Netlist’s Request for
    Admission No. 3 (Hansen
    Reply Dec. ¶ 3, Exh. B)

    “Google lacks sufficient
    knowledge and information
    to admit or deny this Request
    and therefore denies it.”

    —-

    And on GOOG’s “lack of knowledge”:

    From docket 133 (133-main):
    —-
    (pg. 7 )
    While Google protests that it “lacks knowledge” of how its own 4-Rank FBDIMMs are configured and operate, it is undisputed that when Google’s 30(b)(6) witnesses were finally deposed (the protracted history of Google’s failure and refusal to timely produce its witnesses is detailed in Netlist’s moving papers), Messrs. Sprinkle and Dorsey had plenty of non-public knowledge regarding them. Hansen Dec., ¶ 12. Thus, Google’s suggestion that “Google’s lack of knowledge undermines rather than advances Netlist’s position” is false. Opposition p. 5.

    Google’s effort to hide behind its purported “lack of knowledge” is nothing more than a tactic used to avoid meeting its discovery obligations.

    As such, Google delayed in revealing highly-relevant information by obstructing inspection of its server, delaying the production of its 30(b)(6) witnesses, and denying requests for admission based upon lack of knowledge when in fact its witnesses had extensive knowledge of the subject. After going to such great lengths to delay disclosing relevant information, Google cannot now complain about the timeliness of Netlist’s request to amend its infringement contentions; Google itself is responsible for any delay in the production of the information necessitating the proposed amendments.
    —-

    It seems NLST is claiming GOOG manufactured “hundreds of thousands of computer memory modules”.

    I am not sure if it is that much – but maybe NLST has information on how common these modules are in current GOOG setup.

    Also contains an explanation of “Mode C” (and NLST IP).

    From docket 133-2 (which outlines the earlier arguments that eventually led the court to reject GOOG’s arguments and grant NLST’s request to examine a GOOG server):

    —-
    (pg. 2 )
    This is a patent infringement case. Netlist owns IP relating to computer memory modules, and it shared some of those inventions under NDA with Google while Netlist’s patents were pending. Google turned down a business relationship with Netlist. Netlist alleges that Google then went on to manufacture hundreds of thousands of computer memory modules using the Netlist technology, and that it now uses those memory modules in server computers at Google data centers.1

    One of Netlist’s production requests was for an allegedly infringing Google server. Google has refused to produce one. In its Request for Production No. 12, Netlist requested that Google produce a server containing the Accused Devices, including all software, firmware, and/or “register-setting code” used in the operation of the Accused Devices. The purpose of this request is to allow Netlist to verify that the FBDIMMs used in Google servers function as described in the ‘386 patent.

    In particular, for Google’s FBDIMMs to be infringing, they must be capable of being set to run in what the industry standard-setting body, JEDEC, refers to as “Mode C”. In general terms, Mode C fools the computer into thinking that it is accessing two sets of memory chips, when in fact its access requests are split among four less-expensive sets of memory chips. This yields tremendous cost and energy savings. When an infringing server is turned on, it sets the appropriate registers for Mode C, and it can then report that it has the amount of memory contained on the memory module in those four sets of chips (e.g., 4 gigabytes).
    —-
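    In rough terms, the rank multiplication described in that passage can be sketched in code. The following is a purely illustrative model (the function name, bit widths, and decode scheme are assumptions for the sketch, not any party’s actual design): the host addresses two “virtual” ranks of high-density memory, while the buffer decodes each access onto one of four physical ranks of cheaper, lower-density chips.

```python
# Illustrative sketch of rank multiplication (the "Mode C"-style behavior
# described in the filing). Hypothetical model only: the host believes the
# module has 2 ranks of high-density DRAM, while the buffer actually
# splits accesses across 4 physical ranks of cheaper, lower-density chips.

def decode_rank(virtual_rank: int, row_addr: int, row_bits: int = 14) -> int:
    """Map a (virtual rank, row address) pair to one of 4 physical ranks.

    The buffer "steals" the top row-address bit: the host thinks each
    virtual rank holds 2**row_bits rows, but the module really pairs two
    physical ranks, each holding half of those rows.
    """
    assert virtual_rank in (0, 1), "host only sees 2 ranks"
    top_bit = (row_addr >> (row_bits - 1)) & 1   # high row bit picks half of the pair
    return (virtual_rank << 1) | top_bit         # 2 virtual ranks -> 4 physical ranks

if __name__ == "__main__":
    # Host's view: 2 ranks x 2**14 rows. Module's reality: 4 ranks x 2**13 rows.
    for vr in (0, 1):
        for row in (0x0000, 0x3FFF):
            print(f"virtual rank {vr}, row {row:#06x} -> physical rank {decode_rank(vr, row)}")
```

    This matches the cost argument in the filing: the server still reports the full module capacity (e.g., 4 GB), but the capacity is built from four sets of less expensive chips rather than two sets of denser ones.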

    And GOOG’s answer there includes acceptance of “Mode C”.

    From docket 133-2:
    —-
    (pg. 6 )
    Netlist can determine FBDIMM operation from the code Google has agreed to make available, including the code that controls Mode C operation, and FBDIMM configuration can be determined from the design files and the product itself. Google does not dispute that its FBDIMMs operate in Mode C and the code it has agreed to produce is the code relating to Mode C operation.
    —-

    From docket 133-2 (NLST letter to GOOG – more on “Mode C”):
    —-
    (pg. 16 )
    Netlist requested that Google produce a server so that it can test, among other things, Google’s FBDIMM functionality. The registers controlling the memory module running in what JEDEC refers to as “Mode C” are set by and on the CPU, not the FBDIMM. This is a central infringement issue, and therefore simply producing a memory module and code is insufficient. Similarly, because Netlist argues that Google induces infringement by providing services using the infringing servers, Netlist needs an operational server rather than a memory module in order to prove up its case.
    —-

    GOOG fought production of a server tooth and nail, all the while trying to placate NLST with production of the FBDIMMs, their circuit board plans, and code, but avoiding involving the servers themselves.

    Eventually the court ordered GOOG to produce the server (which is what led to discovery against GOOG, and helped expand claims against GOOG).

  208. Netlist,

    I have to admit the “settlement conference” is a confusing subject to me. I was of the impression that if it were ordered by the court, it was probably due to the fact that the judge saw a definite violation by one side. Now, if I understand it correctly, what it is essentially doing is forcing the sides to declare what they are looking for in a settlement before trial? Am I correct?

    Obviously I have not read all the documents (only what you have been kind enough to share), but this seems pretty one-sided. What does GOOG hope to gain by continuing to delay or going to trial? Are they simply trying to bleed the cash out of NLST?

  209. In most cases with jury trials, the court forces the parties into a “settlement conference” with an arbitrator/judge in order to give them the chance to settle early, i.e. an out-of-court settlement, thus saving the court the expense and bother of a jury trial if the matter can be dismissed or resolved early.

    However, if the parties are not amenable to settling this early, the settlement conference usually produces no result.

    That is why I have noted that the settlement conference date is interesting but does not necessarily mean anything.

    http://en.wikipedia.org/wiki/Settlement_conference
    quote:
    Matters discussed in a settlement conference are confidential, and cannot be introduced as evidence in court. All such information would be considered privileged or hearsay. There is one exception to this rule: statements of fact made by criminal defendants in settlement discussions over disputed civil claims asserted by government agencies are admissible in the criminal case.

  210. netlist,

    Has Netlist given any indication of what they are looking for in the settlement? It seems that the assertion that Google has installed “hundreds of thousands” of mode C infringing modules signals that Netlist believes that Google has committed tremendous damage. How often are agreements reached during Settlement conferences in cases similar to this one?

  211. I have not found any indication of what NLST is looking for in a settlement.

    Though CEO Hong has signalled that:

    From Q4 2009 CC (Feb 18, 2010):
    quote:
    cannot disclose litigation .. vigorously protect
    not in litigation business .. will seek reasonable settlements

  212. Well, based on this information, I can’t see why both sides shouldn’t be able to establish an understanding for a settlement. As it stands now, the only ones gaining are the lawyers (unless there is something we don’t know yet).

    Either way – should be interesting.

    Thanks for all the insight.. Great board!

  213. Netlist,
    It seems in character with the leadership of Netlist to be looking forward, as evidenced by the development of their recent products. The protection of their IP could be worth more than a punitive-sized settlement. Google would look hypocritical if they rejected a reasonable settlement in which the ownership of IP is the major issue, when they complain about the lack of respect for their own IP in China. Thanks for all of the useful information that you have supplied.

  214. My best guess is that Google assumed that because Mode C is an industry standard, they were free to use it without infringing any patents. Netlist had notified Google, as well as the concerned industry standard body that Mode C may infringe their intellectual property. Although they had a letter from NetList, they took the view that NetList is a patent troll.

    Using other people’s patents even after you have been told about them seems evil to me.

    I think this case will cause a lot of changes in the way Google looks at patents. Recent news reports that they have patented various parts of their architecture are significant. Open source clones of their architecture may now be infringing those patents. Google, per its policy, has no plans of suing others for infringement, except as a defense.

  215. Could the lack of news out of the settlement conference indicate both parties are looking for a timely solution? The Supermicro computer server model SYS-6026T-NTR+-GS015 with 288 gig of HyperCloud memory demonstrated at Interop 2010 is good timing. Wonder how it stacks up against Google’s Icarus server?

  216. Don’t know – the settlement conference may not mean anything, as it is mandated by the court. Their settlement dynamics may be moving at their own pace, with the settlement conference just an official date to be ignored or to go to with eyes rolling.

  217. As expected, nothing happened on the April 30, 2010 “mandatory settlement conference” between GOOG and NLST in front of Judge Laporte for GOOG vs. NLST.

    Outcome was “did not settle” (docket 136 in GOOG vs. NLST).

    Parties are required to attend such settlement conferences in order to give every opportunity to avoid the time and resource expenditure by the court for jury trials.

    However the parties may have their own timeline for when they want to settle – so these become a formality.

  218. NLST had recently asked the court for permission to expand the claims against GOOG.

    Judge Armstrong (Docket 134 in GOOG vs. NLST) denied permission to expand the case.

    NLST claimed that since GOOG had delayed in discovery process, NLST got testimony from GOOG employees at a late stage and that helped them craft additional claims against GOOG.

    Judge Armstrong has in the past denied things which might slow GOOG vs. NLST – the joint GOOG/NLST request to consolidate GOOG vs. NLST and NLST vs. GOOG was denied earlier for similar reasons. It would have slowed GOOG vs. NLST down which was in an advanced state.

    Given that precedent, it was not surprising that Judge Armstrong denied expansion of claims as it would disrupt the existing GOOG vs. NLST case.

    I assume this still leaves NLST vs. GOOG (which has a later timeline) open for addition of those claims by NLST.

    However, as indicated in the past, if GOOG/NLST settle early, it will include all cases (and may even include some JEDEC licensing of NLST IP for the JEDEC “Mode C” proposed standard). So the outcome of NLST vs. GOOG may be moot, as it will probably become part of whatever settlement is reached in GOOG vs. NLST.

    This outcome was suggested in an earlier post “it would keep things on track”:

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=15757&mid=15843&tof=1&frt=2#15843
    Re: vicl2010v scammed you all
    quote:
    —-
    Other than that there is a May 4 (today) hearing on NLST’s extra claims against GOOG.

    If approved they would add to the burden against GOOG. If Judge Armstrong disapproves the extra claims, it would keep things on track.
    —-

    Judge Armstrong has earlier also said the parties should try to settle the case “sooner rather than later” – when she earlier denied GOOG/NLST joint request to consolidate GOOG vs. NLST and NLST vs. GOOG.

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=te&bn=51443&tid=14993&mid=15027&tof=1&frt=2#15027
    Re: A Lot Of 100 Lots Going Through

  219. (Caveat: I have not read the original lawsuit or other papers. On the other hand, I have read this thread and the associated hyperlinked material in detail.)

    Extent of damages depends on extent of infringement, and willfulness of infringement.

    Extent of infringement will depend on how many servers Mode C was used on. If it was in use in hundreds of thousands of servers, the damages will be greater.

    This case is unusual – Google apparently had an ASIC manufactured for memory expansion that is compliant with JEDEC Mode C. They had been told that Mode C was NetList IP. The JEDEC standards body had also been told about it. They continued to use Mode C – they decided that NetList was a patent “troll”. Google cannot say that the legal department knew about the NetList claim, but not the hardware engineering group. This defense, if allowed, would make a mockery of the legal system. Any company would be free to argue that it is just a large company where the left hand does not know that the right hand is infringing.

    The penalty for willful infringement is up to 3 times the damages, and the judge has discretion to increase or decrease that amount. Subsequent Google conduct – such as refusing to admit or deny infringement, or delaying the discovery process – will not be looked at favorably. Even calling NetList a patent troll may not be looked at favorably at the stage of calculating damages. Would Google say the same about an established large company in high tech? As it happens, there is only one other company that can provide the same amount of memory as the upstart Netlist.

    I am hoping that Google is forced to settle for hundreds of millions of dollars, and then is forced to buy from NetList for 3 to 7 years. Many companies which pride themselves on being ethical have a policy when settling lawsuits. They will settle a lawsuit in which they are guilty of wrongdoing. Instead of the boilerplate – “Without admitting wrongdoing, we pay millions of dollars” – they will say: we admit our actions were wrong, and are taking steps – such as better decision-making processes and training – to make sure this wrongdoing does not happen again. Stealing patented ideas willfully is “evil”.

  220. Do you think that this case will make it to court? Seems that the cost of bad PR from an unfavorable court outcome would be enough to motivate an early settlement by Google, not to mention the delay in adoption of Netlist products.

  221. quote:
    Do you think that this case will make it to court? Seems that the cost of bad PR from an unfavorable court outcome would be enough to motivate an early settlement by Google, not to mention the delay in adoption of Netlist products.

    Delaying adoption – as in delaying adoption of HyperCloud within GOOG.

  222. Yes. Adopting HyperCloud within GOOG is a possibility, depending on how HyperCloud technology stacks up to Google’s memory modules. Can you get the specs for GOOG’s modules? That information seems to be confidential. If the difference is great enough in Netlist’s favor, GOOG would be shrewd to adopt HyperCloud or some other Netlist IP, and lower its overall settlement and memory cost going forward. Having GOOG as a customer for years to come would be a substantial victory for Netlist. GOOG could also turn an embarrassing situation into an example of how to manage missteps, while maintaining its gleaming corporate image.

  223. I don’t think whether GOOG’s memory is better is the issue. It probably doubles memory, but probably does not allow full-speed operation.

    The issue is: it probably infringes NetList’s patent, and the infringement is perhaps willful – considering NetList had notified them.

    GOOG cannot possibly say: “We lack sufficient knowledge” about whether we infringe. This defense will not be accepted during trial. It seems laughable for a tech giant to say they do not know if something which they designed and manufactured infringes someone’s patents. Normally companies say: we do not infringe, as we work around the claims – or they say the patent is invalid. I have not seen a defense like this.

  224. Comments/analysis of Netlist’s first quarter 2010 results going forward?

    http://www.netlist.com/investors/investors.html

    http://www.bizjournals.com/sanfrancisco/prnewswire/press_releases/California/2010/05/11/LA03022

    Anyone:

    1. Have, close at hand, multiple industry-source market projections (past, current, future) for potential DC server growth 1-10 yrs out (i.e., demand pool)? As well: top 5 server consumers, penetration, region.

    2. Visit Netlist at Interop 2010 Vegas and see the “Data Center in a Box” – thoughts/impressions?

    Joeq: did you see that news article of ‘papers flung from a red Peugeot’? Started a big fire back in the day…

    Question: with regard to bandwidth, how well does it scale? What is the bandwidth for each DIMM (4GB, 8GB, and 16GB), and then the total for a server (i.e., Supermicro etc.) partially or fully loaded, utilizing eighteen 16GB 2vRank HyperCloud DIMMs (288GB DRAM)?
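    The capacity figure in the question checks out with quick arithmetic, and a rough peak-bandwidth estimate can be sketched too – note the DDR3-1333 data rate below is an assumption for illustration only, not a figure from Netlist’s specs:

    ```python
    # Sanity check on the 288GB figure from the Supermicro demo, plus a
    # rough theoretical peak-bandwidth estimate. The DDR3-1333 data rate
    # is a hypothetical assumption, not from HyperCloud documentation.
    dimm_count = 18          # eighteen DIMM slots
    dimm_capacity_gb = 16    # 16GB 2vRank HyperCloud DIMMs
    total_gb = dimm_count * dimm_capacity_gb
    print(total_gb)          # 288

    # Peak bandwidth per DIMM: transfer rate (MT/s) x 8 bytes per transfer.
    ddr3_1333_mt_s = 1333
    peak_per_dimm_gb_s = ddr3_1333_mt_s * 8 / 1000
    print(round(peak_per_dimm_gb_s, 1))  # 10.7 (GB/s, theoretical peak)
    ```

    One caveat on scaling: aggregate bandwidth scales with the number of memory channels, not the number of DIMMs – multiple DIMMs sharing a channel share that channel’s bandwidth, so a fully loaded board gains capacity faster than it gains throughput.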

  225. Does anyone here have access to the settlement information concerning the Texas Instruments case that was just announced?
    Thanks

  226. NLST vs. TXN settled.

    Problem is can’t get access to the court records – although if it is a settlement, that may not be in the court records.

    But if someone can get access (go to court basically, since that court is not on PACER), that would be nice, as it would shed light on the goings-on at JEDEC and how TXN (being fingered as the ORIGINAL leaker of info to JEDEC) may have been involved.

    http://www.prnewswire.com/news-releases/netlist-settles-lawsuit-with-texas-instruments-93915489.html
    Netlist Settles Lawsuit With Texas Instruments

    IRVINE, Calif., May 17 /PRNewswire-FirstCall/ — Netlist, Inc. (Nasdaq: NLST) today announced that it has reached a settlement in the misappropriation of trade secrets and breach of contract lawsuit against Texas Instruments, Incorporated. The settlement resolves a dispute between the two companies concerning the use of proprietary memory modules and other related technology.
    “We are pleased to have successfully resolved this case. Netlist remains committed to protecting its portfolio of intellectual property,” said C.K. Hong, President and CEO of Netlist.

  227. Examining the language in the PR:

    quote:
    IRVINE, Calif., May 17 /PRNewswire-FirstCall/ — Netlist, Inc. (Nasdaq: NLST) today announced that it has reached a settlement in the misappropriation of trade secrets and breach of contract lawsuit against Texas Instruments, Incorporated. The settlement resolves a dispute between the two companies concerning the use of proprietary memory modules and other related technology.

    quote:
    “We are pleased to have successfully resolved this case. Netlist remains committed to protecting its portfolio of intellectual property,” said C.K. Hong, President and CEO of Netlist.

    Here we have CEO Hong issuing the PR – possibly suggesting that NLST is in the driver’s seat regarding who would put out the PR.

    It has NLST speaking, and it does not have a TXN representative speaking (if there was a TXN concession, TXN would have nothing to tout).

    Plus the reiteration of defence of NLST IP.

    Language suggests that this is to the satisfaction of both parties – and esp. NLST.

    From court docket info, the settlement happened 5/10/2010.
    Court-mandated settlement conference was set for 09/29/2010.
    And jury trial for 10/4/2010.

    See docket info for NLST vs. TXN:

    http://www.sccaseinfo.org/pa6.asp?full_case_number=1-08-CV-127991

    10/4/2010 08:45AM 01 CV Jury Trial – Long Cause Vacated; dismissal filed C 05/10/10 None None None

    9/29/2010 01:30PM 01 CV Settlement Conf – Jury Vacated; dismissal filed C 05/10/10 02/10/10 None None

    0038-000 Cv Ntc:Settlement 05/10/2010 None 05/11/2010 For: Netlist, Inc. / PLT

    0037-000 Cv Req:Dismissal, Entire W/Prej 05/10/2010 None 05/11/2010 For: Netlist, Inc. / PLT
    Against: Texas Instruments, Incorporated / DEF

    So this was an early settlement – if TXN is quiet, this would suggest they conceded something which is nothing to crow about. And if so, an early settlement suggests TXN realized that delaying would not help TXN.

    Note that like MetaRAM (which was making buffer chips, plus had some IP), TXN is also making some buffer chips (see above). But TXN is also accused of leaking NLST info to JEDEC.

    What would a concession look like? Would TXN concede IP (MetaRAM conceded IP to NLST, and promised to not let its IP be used against NLST)? Or would TXN concede it will abandon buffer chip manufacture within TXN?

    What would TXN concede regarding leakage to JEDEC?

    What would be interesting is if TXN starts licensing NLST IP. That would be an indicator of the start of a fall of dominoes. TXN is a part of JEDEC – alleged by NLST to have leaked NLST IP to JEDEC – which may later have been used by Intel in that demo at JEDEC, which prompted NLST to inform JEDEC that this technology runs afoul of NLST IP. Which in turn prompted the letter from JEDEC to members – including GOOG. Which GOOG in turn chose to ignore, continuing manufacture until warned by NLST – in response to which GOOG went to court to prevent stoppage of its servers.

    If TXN stops making buffer chips – that would also impact negatively on the Inphi litigation (NLST vs. Inphi).

    We may or may not hear full details on the settlement. The settlement with MetaRAM involved a company in bankruptcy, while TXN has its reputation to protect as well.

    Question is, if NLST is going to look the other way on TXN leakage to JEDEC, what is TXN going to promise in return?

    Just for comparison, here is a recap of the NLST vs. MetaRAM litigation settlement PR at time of settlement:

    http://www.prnewswire.com/news-releases/netlist-announces-settlement-of-patent-infringement-lawsuits-with-metaram-82948382.html
    Netlist Announces Settlement of Patent Infringement Lawsuits With MetaRAM
    Press Release Source: Netlist, Inc. On Thursday January 28, 2010, 1:25 pm EST

    quote:
    Under the terms of the settlement, filed in U.S. District Courts in Delaware and Northern California, MetaRAM will not sell, offer to sell, release, or commercialize the MetaRAM DDR3 controllers in the U.S. or outside the U.S. Netlist contended that MetaRAM’s DDR3 controllers and memory modules incorporating such controllers infringed its U.S. Patent No. 7,289,386, entitled “Memory Module Decoder.” A provision in the settlement protects Netlist if another company purchases MetaRAM’s patent and attempts to seek action against Netlist in the future.
    “We are pleased to have successfully resolved this case,” said C.K. Hong, President and CEO of Netlist. “As the pioneer of this technology, the results of this settlement clearly underscore Netlist’s fundamental patent and product leadership. Netlist’s HyperCloud product-line embodies this foundational technology and Netlist remains committed to protecting its portfolio of intellectual property.”

  228. Does this mean that TXN has licensed the technology to produce high-memory chips?
    In the upcoming generations of Intel processors the limitation on memory is reduced – a 4-socket Intel Xeon with 1024 GB of memory is in the labs of major server vendors.
    This does not mean that there is no need for NetList or other memory expansion technology – it is just that the need for that technology is reduced.

  229. quote:
    Does this mean that TXN has licensed the technology to produce high-memory chips?
    In the upcoming generations of Intel processors the limitation on memory is reduced – a 4-socket Intel Xeon with 1024 GB of memory is in the labs of major server vendors.

    Not clear WHAT TXN could offer to atone for leakage to JEDEC. It would seem like licensing would be one thing – would also be a signal to JEDEC.

    Or maybe they agree to having leaked – which would be useful in NLST vs. JEDEC – which practically speaking impacts NLST vs. GOOG (since GOOG is using JEDEC “Mode C” proposed standard).

    NLST has a first to market advantage which is available now – with buildout for cloud computing it is a good time to be offering this product.

    However, NLST HyperCloud has other advantages – notably the advantage of using “lower dollar per bit” memory chips to emulate “higher dollar per bit” memory chips.

    Eventually new technology will arrive – server motherboards will change – but there is an economic value to have memory solutions that work NOW – and with existing low-priced servers.
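    That “lower dollar per bit” advantage can be illustrated with toy numbers – the chip densities and prices below are purely hypothetical, for illustration; real DRAM pricing is not given anywhere in this thread:

    ```python
    # Toy comparison: reaching a 16 GB module with many cheap low-density
    # DRAM chips vs fewer premium high-density chips. All prices and
    # densities here are hypothetical placeholders.
    target_gb = 16

    low_density_gb, low_density_price = 0.25, 2.00    # e.g. 2 Gb chips
    high_density_gb, high_density_price = 0.50, 6.00  # e.g. 4 Gb chips at a premium

    cost_low = (target_gb / low_density_gb) * low_density_price     # 64 chips
    cost_high = (target_gb / high_density_gb) * high_density_price  # 32 chips

    print(cost_low, cost_high)  # 128.0 192.0
    print(cost_low / target_gb, cost_high / target_gb)  # dollars per GB: 8.0 vs 12.0
    ```

    The point being: if a rank-multiplying buffer lets the module present the 64 cheap chips to the memory controller as if they were fewer, higher-density ones, the module wins on dollars per bit even though it carries more silicon.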

  230. Mode C has a limited lifespan going forward, but Netlist doesn’t look like a one-trick pony. The fact that Netlist figured out how to increase the address range on current motherboards without BIOS changes is amazing, and Google and others thought it was useful. HyperCloud involves additional Netlist IP that should be very useful in designing memory modules for the next generation of motherboards. IP needed to manage cost, space, speed, energy, and thermal issues will outlive the current Mode C requirement for expanded memory addressing. HyperCloud is a great prototype demonstrating how to engineer high capacity/performance modules even as the need for Mode C diminishes. Netlist is positioning itself to become a major industry player. They must be successful in protecting their IP and executing properly. It seems they were denied an opportunity to grow by Google’s rebuff. I would expect the settlement to address that issue.

  231. 10-Q filed by NLST:
    http://www.secinfo.com/d11MXs.rSe8.htm

    On the NLST vs. TXN litigation settlement:

    quote:
    Trade Secret Claim
    On November 18, 2008, the Company filed a claim for trade secret misappropriation against Texas Instruments (“TI”) in Santa Clara County Superior Court, based on TI’s disclosure of confidential Company materials to the JEDEC standard-setting body. On May 7, 2010, the parties entered into a settlement agreement. The court dismissed the case with prejudice.

    As stated in previous post above:
    quote:
    From court docket info, the settlement happened 5/10/2010.
    Court-mandated settlement conference was set for 09/29/2010.
    And jury trial for 10/4/2010.

    The 10-Q now reveals they had agreed to settle May 7, 2010.

  232. Superficially, HyperCloud seems to offer the advantages of the JEDEC “Mode C” proposed standard, plus plug and play, requiring no updates to the BIOS.

    In addition it brings with it the integrated advantages of the “embedded passives” and NLST’s thermal IP (even heating to reduce thermal disparity so memory modules perform within tighter tolerances).

    From the recent 10-Q filed by NLST:
    http://www.secinfo.com/d11MXs.rSe8.htm

    A good explanation of HyperCloud – pointing out that the “no BIOS changes” results in having no impact on OEM’s product cycles:

    quote:
    Our HyperCloud™ products can be installed in servers without the need for a bios change. As such, their design and anticipated sales launch is not dependent on the design plans or product cycle of our OEM customers. Alternatively, when developing custom modules for an equipment product launch, we engage with our OEM customers from the earliest stages of new product definition, providing us unique insight into their full range of system architecture and performance requirements. This close collaboration has also allowed us to develop a significant level of systems expertise. We leverage a portfolio of proprietary technologies and design techniques, including efficient planar design, alternative packaging techniques and custom semiconductor logic, to deliver memory subsystems with high speed, capacity and signal integrity, small form factor, attractive thermal characteristics and low cost per bit.

  233. “Superficially, HyperCloud seems to offer the advantages of JEDEC “Mode C” proposed standard plus the plug and play and requiring no updates to BIOS.”

    Jedec Mode C is a Netlist invention. That is the crux of the lawsuit.

  234. quote:
    Jedec Mode C is a Netlist invention. That is the crux of the lawsuit.

    Yes – basically that it violates NLST IP. However, that does not mean it is better than HyperCloud. In fact HyperCloud – as qualified by Supermicro (SMCI) – is a finished product which includes the advantages of the JEDEC “Mode C” proposed standard PLUS the advantage of plug and play, with no BIOS updates required.

  235. What will be the hint before there is a big settlement?
    Will there be a big settlement?

  236. Hi MemoryGeek,

    It’s hard to say if there will be a settlement, or if we’ll get some kind of hint beforehand if there is one.

    Many lawsuits do settle – many of them on the courthouse steps in the moments before a trial.

  237. NLST has asked the court for “summary judgement” in GOOG vs. NLST (the case GOOG brought to protect its servers from being shut down etc.) on the basis of some exhibits, mainly testimony from a JEDEC attorney and a GOOG employee.

    http://en.wikipedia.org/wiki/Summary_jud
    Summary judgment

  238. An easy-to-understand explanation of “summary judgement”:

    http://answers.yahoo.com/question/index?qid=20071028194145AAxtvtV

    quote:
    Best Answer – Chosen by Asker

    A summary judgment is a decision by a judge that decides the case early because there are no facts in dispute. The judge’s decision means that the case never goes to trial.

    To ask for a summary judgment from a judge, you must do the following:
    1. File a Motion for Summary Judgment asking the judge to rule in your favor. It must be filed pretty soon after discovery is complete.
    2. In the Motion for Summary Judgment, you must submit case law and facts that support your Motion for Summary Judgment.
    3. Generally, you won’t win a summary judgment motion unless there are NO facts in dispute – meaning the only issue outstanding is an issue of law.

    For example, let’s say you and I are neighbors. Your trees were blocking my view – so I cut them down. You sue me to get the trees replaced. Both of us agree that this is what happened (that the facts are not in dispute).

    Since we agree on the facts, the only outstanding issue is what the law says.

    Why have a jury trial when juries ONLY decide facts – not law. Judges decide the law; therefore, the above case is RIPE for a decision.

    The law states that I don’t have the right to trespass on your property and damage your property; therefore, if you file a motion for summary judgment the judge will find in your favor.

    http://en.wikipedia.org/wiki/Jury_trial
    Jury trial

    A jury trial (or trial by jury) is a legal proceeding in which a jury either makes a decision or makes findings of fact which are then applied by a judge. It is distinguished from a bench trial, in which a judge or panel of judges make all decisions.

    Juries usually weigh the evidence and testimony to determine questions of fact, while judges usually rule on questions of law, …

    Another explanation:

    http://www.legalandlit.ca/summaries/first/civpro/civpro_farrow_w07.doc
    14 Stages of a Lawsuit -CHECKLIST

    quote:

    7. Disposition Without Trial – most cases don’t get to trial (only 1-3 percent get to trial)

    4 different possibilities:

    1. negotiated settlement – the most common resolution

    mediated settlement – mediation is assisted negotiation with the assistance of a third party – a mediator helps facilitate communication b/w the parties – there is now a rule requiring mandatory mediation – reduces costs and helps achieve a resolution

    2. motion for judgment – if a party has made admissions through the oral examination for discovery process which entitle the opponent to succeed, you can move for judgment b/c the evidence sworn under oath on discovery entitles a win w/o a trial

    3. moving for summary judgment – when one party in an action can demonstrate to the court that there is no triable issue in the case – the difference w/ motion for judgment is that there is no possible evidence to defeat your claim

    4. striking a pleading by using R 25.11

    5. default judgment – the D has failed to deliver a statement of defence – if you don’t respond to a statement of claim you are deemed to admit the allegations in the statement of claim – difficult motion to win b/c the court is being asked to do something quite significant, which is an ex parte (one party) – elements of the action still need to be proven, ie. Serving affidavit, damages, et

    WHEN IS R20 USED (anytime before trial)
    o if after discovery, you look at their evidence and conclude that they cannot back up their pleading, so a Rule 20 motion allows for early adjudication to have matter resolved
    o Efficient on party resources and time and judicial resources
    o It avoids a trial or shortens the proceeding on satisfying a court that there is no need for a trial because there is no genuine issue of fact requiring one

  239. Thanks, netlist.

    That’s a fairly thorough explanation of a summary judgment.

    If I were to simplify it, I might say that a motion for summary judgment is a ruling on the law involved in a case when there aren’t any facts left to dispute and argue in front of a jury, and only a legal interpretation of the law involved is necessary for a decision on a case.

  240. Yes, basically it seems a jury trial is to establish “the truth” (or facts).

    While the judge usually rules on the law – or in some cases the jury rules on the facts as well as the judgement, to varying degrees.

    The suggestion being that once the facts are clear, the jury is no longer required, and one side may tell the judge that “discovery has unveiled facts which are not in dispute any longer”. NLST cites JEDEC lawyer testimony and GOOG employee Robert Sprinkle (although some of their testimony is blacked out because it is confidential – attorneys’ eyes only).

  241. Good points.

    Many motions for summary judgment fail because there are still some facts in dispute that haven’t been uncovered in discovery to the satisfaction of the judge deciding the motion, but it’s often worth filing a motion for summary judgment to avoid the expense of a trial, and the time that it might take to have that trial.

  242. Do you know if a summary judgement request usually leads to the judge giving time to the other party to settle ?

    That is, if the judge sees things are “not looking good” they can say that in the conferences they have with the two sides’ lawyers, and urge them to settle.

    In such a case, GOOG would be hard pressed to settle fast – since in that climate they would be willing to pay MORE than what a judge might order. With a judge’s order, GOOG not only has to pay, but also suffers:

    – security of GOOG servers in question (could harm GOOG share price)

    – GOOG may have to use NLST products ANYWAY if they are the only ones available in this area

    – would not help “do no evil” mantra (and GOOG’s entire business model of intrusive “big brother-like” behavior is tempered by the perception that they “are good”)

    – would provide ammunition to competitors to use GOOG prior history of subterfuge

    So in fact settling is a significant advantage for GOOG – though I don’t know if that means NLST can therefore command a premium for it.

  243. I am not a patent lawyer. But usually in patent law cases, the defendant claims that they have not infringed on the patents. This has not been done by Google. They seem to say that they do not know if they have infringed. This would be somewhat acceptable if they had done some lab experiments. But they have actually gone and hired vendors to construct the ASIC. Therefore, they have infringed – and since they had notice through various means, the infringement may be considered willful.

  244. Hi Netlist,

    Usually, an opposing party has time to file a response to a motion for summary judgment, and sometimes the possibility of asking for more time in some instances. Courts do tend to like the possibility that a case might settle before reaching a trial as long as it doesn’t appear that a party is attempting to delay only to make a case drag on.

  245. Hello Netlist,

    Is it probable that Google is having the “Mode C” memory modules manufactured and installed while litigation goes on?

  246. quote:
    Is it probable that Google is having the “Mode C” memory modules manufactured and installed while litigation goes on?

    There has been no indication that GOOG has stopped what it was doing.

    GOOG has indicated that the GOOG server they provided to NLST lawyers was representative of the infringing servers.

    That does not indicate:

    – if they are continuing to do so.

    – if they are a minority or a significant portion (10% ?) of the current GOOG server set.

    – if these servers are part of GOOG Caffeine project (GOOG’s effort at near real-time search which is probably even more reliant on in-memory techniques, thus requiring more memory than earlier servers). GOOG Caffeine has recently gone mainstream.

    It seems NLST is claiming GOOG manufactured “hundreds of thousands of computer memory modules” – as earlier posted:

    http://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-267873

    quote:
    ——–
    From docket 133-2 (which outlines earlier arguments – which eventually led the court to refuse GOOG arguments and grant NLST request for examining GOOG server):

    —-
    (pg. 2 )
    This is a patent infringement case. Netlist owns IP relating to computer memory modules, and it shared some of those inventions under NDA with Google while Netlist’s patents were pending. Google turned down a business relationship with Netlist. Netlist alleges that Google then went on to manufacture hundreds of thousands of computer memory modules using the Netlist technology, and that it now uses those memory modules in server computers at Google data centers.1

    ——–

  247. While NLST says GOOG may be using this memory in “hundreds of thousands of servers”, I wonder if GOOG actually uses so much memory.

    Some articles have suggested GOOG used to buy bargain basement priced memory.

    But maybe the move to Caffeine and scaling have pushed GOOG to use higher-memory computers.

    After all, GOOG WAS interested enough to start its own memory module division.

    But it still remains unclear if all GOOG servers run at max memory loading or just a few.

    However, the effects of memory loading may be apparent at reasonably sized memory levels as well (i.e. you don’t have to load to 384GB – it may be apparent as low as 24GB or so?), and if so, the availability of HyperCloud-like solutions impacts many more of GOOG’s servers.

  248. Google Caffeine seems to include these things as well:

    – a rewrite of the GFS (Google File System 2 or GFS2)
    – doing more stuff “in-memory” (i.e. RAM), including databases held entirely in memory etc. (what is not clear is what percentage of its servers would be involved with higher memory use applications – however GOOG’s interest in making and using its own memory may suggest some interest from GOOG in this direction)

    NLST CEO Hong has mentioned all these uses for NLST HyperCloud:
    – government labs with HPC (High Performance Computing) applications (the market Viglen caters to – Viglen recently qualified HyperCloud, though that may have followed from Supermicro qualifying it, since Viglen sells Supermicro (SMCI) equipment).
    – search applications needing to do things more in memory (RAM)
    – database applications where the whole database is in memory (RAM)
    – video delivery – supposedly huge user of memory

    It is possible GFS2 itself shifts stuff to in-memory use:

    http://www.channelregister.co.uk/2009/08/12/google_file_system_part_deux/
    or
    http://www.channelregister.co.uk/2009/08/12/google_file_system_part_deux/print.html
    Google File System II: Dawn of the Multiplying Master Nodes
    A sequel two years in the making
    By Cade Metz in San Francisco
    Posted in Enterprise, 12th August 2009 02:12 GMT

    quote:
    But GFS supports some applications better than others. Designed for batch-oriented applications such as web crawling and indexing, it’s all wrong for applications like Gmail or YouTube, meant to serve data to the world’s population in near real-time.

    “High sustained bandwidth is more important than low latency,” read the original GFS research paper. “Most of our target applications place a premium on processing data in bulk at a high rate, while few have stringent response-time requirements for an individual read and write.” But this has changed over the past ten years – to say the least – and though Google has worked to build its public-facing apps so that they minimize the shortcomings of GFS, Quinlan and company are now building a new file system from scratch.

    GFS dovetails well with MapReduce, Google’s distributed data-crunching platform. But it seems that Google has jumped through more than a few hoops to build BigTable, its (near) real-time distributed database. And nowadays, BigTable is taking more of the load.

    “Our user base has definitely migrated from being a MapReduce-based world to more of an interactive world that relies on things such as BigTable. Gmail is an obvious example of that. Videos aren’t quite as bad where GFS is concerned because you get to stream data, meaning you can buffer. Still, trying to build an interactive database on top of a file system that was designed from the start to support more batch-oriented operations has certainly proved to be a pain point.”

    From the comments section:
    http://forums.channelregister.co.uk/forum/1/2009/08/12/google_file_system_part_deux/

    As for “other tools”; Lustre was invented as a local network filesystem. GFS was invented to handle thousands of tasks all reading & writing as fast as they could all day every day. The indexing pipeline; download the internet, index it, run a few mapreduces over it to mark down spammy sites, crappy sites, duplicate sites, dead sites etc. and then compress it so it could be shipped all over the place. As Sean says in his interview, these days ‘routine use’ is dozens of petabytes of data that has to be randomly accessed – as in, the metadata has to stay in RAM.

    “Still, trying to build an interactive database on top of a file system that was designed from the start to support more batch-oriented operations has certainly proved to be a pain point.”

  249. Hi Netlist,

    Google has had a long history of attempting to do as much with software as possible while using as many inexpensive computers as they can. The original Google File System focused upon that approach, and the GFS 2 system, which was supposedly developed in 2007-2009, worked hard to distribute the processes involved over more computers, as well as making more processes happen in memory rather than touching disks as much. So more memory on each machine might help.

    Google is supposedly working on a GFS 3, but I haven’t heard too much in the way of details. I would suspect that it would still focus upon spreading out processes on as many inexpensive computers as possible, though.

  250. The relevance for NLST investors however is to estimate the number of servers that may be using high memory (sufficient that memory loading causes slowdown – in which case the NLST HyperCloud, or the JEDEC “Mode C” proposed-standard memory modules that GOOG was manufacturing, become useful).

    If these are common needs for most of their servers (not just the main or central node ones), then there may be “hundreds of thousands of servers” that infringe NLST IP.

    If they are only a few node servers, that number would be less.

    However the fact that GOOG went to great lengths to manufacture the JEDEC “Mode C” proposed standard modules suggests they needed these crucially (and could not rely on JEDEC or the memory module makers to produce these on time). It also suggests that the need was not for a few servers alone.

    I don’t know enough about GFS2 (or GFS3 as you suggest) – but the suggestion is that more is being done in-memory (either indexing or database type stuff is in-memory). The question (of interest to NLST investors) is however still how many servers have these high-memory requirements.

    Thanks for your comments.

  251. The problem complicating the above analysis is that as GOOG moves to GFS2, does distributing the main nodes’ work out to many servers still entail using much more memory per server ?

    Or is the move part of a strategy that knows it CAN increase memory per server (earlier node servers may already have been running at high memory) – which is what allows some of the faster (in-memory type) methods to work.

    Or are the servers still able to get by with measly amounts of memory – since the tasks they do do not require that much (per server) with the new GFS2 structure.

  252. Hi Netlist,

    Part of the change to GFS 2 involves smaller file sizes as well, which should help reduce the amount of memory required per server.

    See: Google File System II: Dawn of the Multiplying Master Nodes

    A snippet:

    And with files shrunk to 1MB, Quinlan argues, you have more room to accommodate another ten years of change. “My gut feeling is that if you design for an average 1MB file size, then that should provide for a much larger class of things than does a design that assumes a 64MB average file size. Ideally, you would like to imagine a system that goes all the way down to much smaller file sizes, but 1MB seems a reasonable compromise in our environment.
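    A back-of-the-envelope calculation shows why average chunk size matters so much for master-node memory. The ~64 bytes of in-RAM master metadata per chunk used below is an assumption for illustration, as is the 10 PB store size (the article only says “dozens of petabytes”):

```python
# Rough estimate of GFS master-node RAM needed to hold chunk metadata.
# The 64-bytes-per-chunk figure is an assumption for illustration.
PB, MB, GIB = 2**50, 2**20, 2**30

def metadata_ram_gib(total_bytes, chunk_bytes, bytes_per_entry=64):
    """GiB of master RAM to index a store at a given average chunk size."""
    return (total_bytes // chunk_bytes) * bytes_per_entry / GIB

store = 10 * PB  # example figure; the article says "dozens of petabytes"
print(metadata_ram_gib(store, 64 * MB))  # 64MB chunks -> 10.0 GiB of metadata
print(metadata_ram_gib(store, 1 * MB))   # 1MB chunks  -> 640.0 GiB of metadata
```

    Under these assumptions a single master’s metadata grows 64-fold at 1MB chunks, which is consistent with the “multiplying master nodes” theme of the article; whether memory per server goes up or down then depends on how that metadata is spread across the additional masters.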

  253. Docket 166-4 (GOOG vs. NLST) has a few pages from the transcript of the claims construction conference held Nov 12, 2009 after which court overturned GOOG’s reading of the claims construction. The court also disallowed GOOG’s request to include the “patent prosecution history” for NLST patents at the USPTO.

    Here is a more complete selection from the transcript.

    Listen as GOOG tries to argue (unsuccessfully) for their version of the language to describe the situation.

    As many who have read the court docs can see as well – the judge also sees that GOOG is trying to use overexpansive language to actually change what the problem is – from something that infringes NLST IP to something that is more general or separate (i.e. doesn’t look like NLST IP).

    This comment by judge is relevant:

    quote:
    asked you, ’cause what i’m — what I am perceiving is — is an overbreadth in terms of incorporating in the construction of the phrase itself the reason or the motivation for wanting to have the phrase included in the first place.

    GOOG’s claims construction arguments (and its demand that the patent prosecution history for NLST patents be included) were discarded by the court (Nov 12, 2009 hearing on claims construction etc.).

    Note: Pollack is attorney for GOOG. Pruetz for NLST.

    From docket 166-4 (GOOG vs. NLST):

    —-
    (pg. 3 – pg. “165” of transcript)

    .. the computer system that the memory module has a certain thing; in this case, a number of ranks.

    Then the computer system generates and transmits a set of output control signals corresponding to the two ranks. It’s told by the SPD that it has one rank. It generates control signals that correspond to the one rank, and then the logic device generates control signals that correspond to the two ranks that are actually there.

    The Court: Okay. Where are you looking at “generating” ?

    Mr. Pollack: Okay. So for example, if the —

    The Court: Where is the generation language that you’ve just been reading ?

    Mr. Pollack: The logic element receives —

    The Court: Right.

    Mr. Pollack: — the set of input control signals corresponding to the single rank from the computer system’s memory controller.

    The Court: I know. But then you’re using the word “generating” —

    Mr. Pollack: Oh, and then I was reading what the modules does with them. The logic device takes those and generates the ones that match up with the actual number of ranks, which is two.

    In each case, where the “corresponding to” language

    —-

    —-
    (pg. 3 – pg. “166” of transcript)

    comes in, it’s preceded by the discussion of the SPD device characterizing to the computer system the different number of devices.

    So when — when the claim says that the signals received from the computer system correspond to this second number of devices, it’s because the computer system’s been instructed that those — that that’s what’s there. And that’s why you need a logic device. The whole point of the logic device would go away, as you pointed out earlier, if the computer system knew exactly how many devices were on there.

    The Court: But are we — in construing the phrase, are we — are we making judgements as to why as opposed to what ? I mean, you’re saying it corresponds to a — it’s because of this that it corresponds to.

    In terms of the construction of the phrase itself, why would we build into the construction of what the phrase means “corresponding to” the reason why the — it has determined that it’s appropriate to “correspond to” ?

    Mr. Pollack: Well, what it means for the — the signals, right – the signals are coming from the computer system, right ? The corresponding language is characterizing those signals.

    The Court: But —

    Mr. Pollack: Right ?

    The Court: You don’t understand the question that I

    —-

    —-
    (pg. 3 – pg. “167” of transcript)

    asked you, ’cause what i’m — what I am perceiving is — is an overbreadth in terms of incorporating in the construction of the phrase itself the reason or the motivation for wanting to have the phrase included in the first place.

    And — and my concern is that why is that appropriate to — to — to incorporate in the definition of what the phrase means the reason why the phrase is there and the reason why there’s a — been a decision to — to make it in that way.

    Mr. Pollack: Because it — I don’t see it as incorporating the reason. It’s the why —

    The Court: You said “it’s because.”

    Mr. Pollack: The — the characterization — the claim characterizes the signals as corresponding to something.

    The Court: Yeah, but what do we care whether the computer understands it or not, as long as it corresponds ?

    Mr. Pollack: Well, the only way it can correspond —

    The Court: Well I don’t know whether that’s true or not.

    Mr. Pollack: The —

    The Court: I’m trying to understand —

    (simultaneous colloquy)

    The Court: — the fact of the correspondence. Now you’re saying that’s the only way that the fact of the correspondence can occur. I don’t know whether it can or not. But we can certainly accommodate the fact of the corresponding.

    —-
    (pg. 3 – pg. “168” of transcript)

    We can define what “to correspond” means.

    Now you might be right that it will only be actualized if the computer understands whatever, or maybe not.

    But — but we will always be able to determine whether or not the correspondence is transpiring, right ?

    Mr. Pollack: If the signals that are coming from the computer system —

    The Court: You don’t know how to answer the questions “yes” or “no.”

    Mr. Pollack: We can —

    The Court: Well, then why don’t you just answer the question ? I mean, because it doesn’t do much for your credibility when you just side-step questions that obviously you can answer and you choose not to. I ask the questions, and you give the answers. And then after that, if we have time to discuss what you want to discuss, we’ll discuss.

    I’m asking questions. I’m trying to advance this discussion and narrow down to what really is the essence of the dispute here.

    Mr. Pollack: I apologize, your honor. I’m not sure I understood your question. That’s why I was trying to rephrase it to — to understand what you meant.

    The Court: Okay, then tell me that next time.

    Mr. Pollack: If what you mean is that you can tell — that the signals themselves correspond to a thing based ..

    —-
    (pg. 4 – pg. “185” of transcript)

    Mr. Pollack: The Court —
    The Court: — What you are asking me — What you’ve just proposed, you know, “matching up” — the language you proposed for the two words that you agree are the only ones that you all dispute means exactly the same thing as the two words that are here. I mean, I understand that you all know the context and you have a lot of other — a lot of other issues that you’re concerned about.

    But from my part, listening to you all and listening to the dispute, as you characterize it, and listening to your proposals as you’re proposing to me, the language that you are suggesting to substitute in it’s — instead of the two words that you dispute mean exactly the same thing that these two words mean. And so I can only conclude that there’s no substantive dispute.

    Mr. Pollack: Actually, your honor, I’m sorry. I wasn’t just suggesting that you just replace “corresponding to” with “match up” —

    The Court: Well, you said “conform”. “Conform” means the same thing as “corresponding to.”

    Mr. Pollack: I was actually attempting to modify netlist’s proposed construction in that —

    The Court: What ?

    Mr. Pollack: — That’s — What I was — when we were talking about before —

    The Court: Okay, so what language did you suggest to — to substitute the words — because we’re — we’re down to “corresponding to”.

    Mr. Pollack: Right. And I suggested that where they — if we said that the control —

    The Court: No, what —

    (simultaneous colloquy)

    The Court: Excuse me. What — what words are you suggesting that I substitute for “corresponding to” ?

    Mr. Pollack: “Are configured to use.”

    The Court: So you’re saying, “the set of input control signals are configured to use.”

    Mr. Pollack: A second number of devices.

    The Court: I’m — no that doesn’t — no. Okay. I’m not going to construe this term. I don’t think it requires construction based on my understanding of what you all are disputing.

    And — and to suggest that control signals, given your — given your — the construction of what a signal is, is configuring control signals is just nonsensical to me.

    Mr. Pollack: It — it would mean that you choose — the set of signals are chosen, are configured, are designed, to match up to — to the second number of devices.

    Ms. Pruetz: Your honor, I just don’t think there’s

    —-

    —-
    (pg. 4 – pg. “187” of transcript)

    any way that you can define “signal” to be configured as he’s —

    (off-the-record discussion)

    Mr. Pollack: And it’s a set of signals.

    The Court: What’s your definition ?

    (off-the-record discussion)

    Mr. Pollack: I’ve got it. We’ve construed it as “varying electrical impulse that conveys information from one point to another.”

    The Court: Excuse me ?

    Mr. Pollack: “A signal is a varying electrical impulse that conveys information from one point to another.”

    The Court: Right. And so you’re suggesting configuring —

    Mr. Pollack: A set of signals may be configured.

    Ms. Pruetz: Oh, I mean, that — that is really a discussion for some experts, but I can’t quite imagine using “configured” that way.

    The Court: And I don’t — it doesn’t — well, it may be — it may be something that is possible, but it’s not readily apparent to me. And given the way this — this proposed phrase reads, it’s certainly a lot clearer in the phrase as it’s presented now than it would be if I included that language in that fashion.

    (off-the-record discussion)

    The Court: Okay. So the last one is the — well, I

    —-

    —-
    (pg. 4 – pg. “188” of transcript)

    guess this — probably has some bearing on the last one. “The first command signal corresponding to the second number of ranks.” And the only difference here is three words by Google “generated by” — or no, “generated by two” command — and then netlist says “received from”, which is configured to utilize.

    Okay. Any — any comments ?

    Ms. Pruetz: This is really the same situation we had before. I mean, it’s the first command signal that’s corresponding to the second number of memory ranks, which is also the smaller number of memory ranks. It’s just using “ranks” instead of “devices”.

    The Court: Yeah.

    Ms. Pruetz: So I think our argument would be the same, that it’s clear the way it is.

    The Court: Counsel ?

    Mr. Pollack: Again, your honor, with the — with the understanding that the command signals, the way they’re operated on is different. And what we’ve — that’s why we used “command” again to emphasize that what we’re doing here is the computer system’s commanding with these command signals a memory module having a second number of ranks.

    That’s the — the command signals also are intended and generated to work with the computer — the second number of ranks, which is different from the number of ranks that’s actually there.

    —-
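    For readers trying to follow the “corresponding to” dispute in the transcript, the mechanism being construed can be sketched very roughly. This is a deliberately simplified illustration, not the patented circuit; in particular, steering by a spare address bit is an assumption made here only to keep the example concrete:

```python
# Simplified sketch of rank translation on a load-reduced memory module:
# the SPD tells the memory controller there is ONE rank, so the controller
# drives one chip-select; the on-module logic element regenerates signals
# for the TWO physical ranks actually present. Illustration only.
def translate_chip_select(cs_active, steer_bit):
    """Map the controller's single chip-select onto two physical ranks,
    using a spare address bit (hypothetical) to pick which rank to enable."""
    if not cs_active:
        return (False, False)          # no access: neither rank selected
    return (True, False) if steer_bit == 0 else (False, True)

# The controller asserts its one chip-select; the module enables exactly
# one of the two real ranks, depending on the steering bit.
print(translate_chip_select(True, 0))   # (True, False)
print(translate_chip_select(True, 1))   # (False, True)
```

    In these terms, the controller’s signals “correspond to” the one rank it was told about, while the logic element’s regenerated signals “correspond to” the two ranks actually there – which is the distinction the parties were arguing over.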

  254. Thanks for sharing these parts of the transcript, netlist.

    Lawsuits involving highly technical issues have to be hard on judges, who tend to be pretty well versed in legal issues but may not have the technical background to understand high tech and the intricacies of things like memory modules. I know that in some cases involving technical issues like this, a court will appoint someone who has both a legal and a technical background as a special master to explore those topics and present them to the court in a way that a judge and/or a jury might understand. It does sound like there’s some confusion here on the part of the judge, but it also sounds like he’s trying to be very careful to understand what’s being described.

  255. netlist,

    Can GOOG drop the GOOG vs. NLST case at this point, whether or not the judge grants the summary judgment?

  256. spencity

    Are you asking somebody you picked a fight with to stop beating on you? HAHAHA!

    I wouldn’t until you were dead.

  257. Sorry Spencity,

    That was a little harsh.

    I don’t think you can walk away from a case you started, and hope it’s going to be okay.

    You will lose!

  258. quote:
    Can GOOG drop the GOOG vs. NLST case at this point, whether or not the judge grants the summary judgment?

    I don’t know.

    My guess is if you file a suit, you should be able to drop it.

    Unless there is some complication with NLST being counter-claimant and GOOG as counter-defendant as well in GOOG vs. NLST.

    One would think if GOOG was dropping the case, in most cases there would be some concession to be extracted from the other party – and then that would be a “settlement”.

  259. That’s OK fallguy. My train of thought is that Google filed this case as the plaintiff in order to control the litigation process, and not because they have a case. It may be possible that Google has an indirect advantage in delaying the outcome of any litigation, by keeping some of Netlist’s IP questionable to the industry as long as possible. Google could delay a clear outcome significantly if Netlist has to wait for the NLST vs. GOOG case to run its course. I would think that Netlist’s pricing power and industry adoption of HyperCloud will be affected by the outcome of any settlement, and a delay of a settlement may cause some customers to hesitate accordingly. That would give Google more time to take advantage of their current lead in technology, which could be worth more than the cost of a settlement. Just playing the “what if” game.

  260. There is still a possibility of a settlement, but at some point a settlement needs to be approved by the judge in the case, even if the judge’s input is just rubberstamping a settlement agreement.

    I think you make a good point spencity, about Google having to carefully weigh the cost of continuing to pursue litigation, against what the cost of a settlement might be.

    I remember roughly what the present Delaware Chancellor said to both parties in a big civil lawsuit in Delaware’s Chancery Court a year or so ago, after their presentations were finished: he told them that it would probably be at least 4 or 5 weeks before he came out with a ruling, and he urged them to continue considering a settlement in the case. He said something like – “usually most people don’t like my decisions – on both sides of a case.”

  261. You can’t just drop a case you have started if the other party has counter claimed because their counter claim would still stand. In these cases though, a settlement is normally best for all concerned.

  262. Hi Simon,

    Good point. Just to amend that a little, you can drop your claim in a case even if the other party has filed a counter claim, but as you note, the case would continue to resolve the counterclaim.

  263. anybody still active on this board?

    It appears everything is on hold until the USPTO review is completed. This is like watching grass grow. Oh wait! my grass actually grows a heck of a lot faster than this.

  264. Hi Fallguy,

    I’m still pretty active on this blog with new posts. Didn’t expect this post to grow to almost 280 comments.

    I’m still puzzled why Google purchased MetaRAM in the first place, concerned about what might happen in the lawsuit, and about what its implications may be.

    I’m not an investor in any of the companies involved, but I’m still very much interested in the outcome.

    The USPTO isn’t involved at this point, as far as I know.

  265. Hi Bill,

    Inphi has successfully filed a challenge with the USPTO regarding the ’386 patent. GOOG filed for and was granted a stay in the GOOG vs. NLST case pending determination by the USPTO. I believe a stay was also filed for and issued in the NLST vs. GOOG case. I am pretty sure I had read the docs concerning this, but I don’t seem to be able to access them any more. For the life of me I don’t know why the court would grant a stay in one case and not the other. They may be separate cases, but they are fighting over the same IP.

    Now SMOD has challenged the 386 patent:

    http://biz.yahoo.com/iw/101025/0677140.html?.v=1

    Does anyone have any thoughts of this?

    Cheers.

  266. Hi Fallguy,

    It’s interesting to see these challenges to Netlist coming out. I’m not quite sure what to make of both the Inphi and the SMOD actions, but I think I’ll be spending some time learning more if I can. How do those impact what Netlist is doing now? I’m not sure.

  267. Thanks, netlist.

    Some interesting details there, especially (to me) the section about a patent reexamination of the ‘912 patent requested from SMOD and Google, with a possible decision as to whether that reexamination will be granted or denied expected in January.

  268. Reposting Q3 2010 earnings call transcript (not exact) – with some corrections.

    http://www.netlist.com/investors/investors.html
    Netlist Third Quarter, Nine-Month Results Conference Call
    Monday, November 15 at 5:00 pm ET

    “http://viavid.net/dce.aspx?sid=00007D25” (need to log in)
    Netlist – 2010 Third Quarter & Nine-Month Results Conference Call
    Nov 15, 2010 05:00 PM (ET)

    NLST Q3 2010 earnings call transcript (not exact)

    Chuck Hong – CEO
    Gail Sasaki – CFO

    Chuck Hong:

    Good afternoon Matt.

    Thank you all for joining us today to discuss the Q3 results. As you saw from our release earlier today, we had another strong quarter, with 64% growth in revenue over last year’s Q3 revenues and a 93% increase in gross profit year over year.

    Virtually all of that growth came from what we refer to as our “baseline business” which is a combination of the NetVault family used in RAID controller subsystems as well as sales of other memory modules utilized in data centers and industrial applications.

    Since our last call we have been very busy increasing the marketing effort for the HyperCloud technology. At the recent InterOp trade show in NY, we demonstrated how HyperCloud technology makes it possible to run 100 virtual machines on a single fully loaded 24 (memory) slot server with 384GB of memory.

    That demonstration showcased how companies can increase memory capacity and maximize server utilization to support growing (?) demand for virtualization and other data-intensive applications.

    During the quarter we also conducted a series of simulations that show performance enhancements made possible by HyperCloud in overcoming existing memory constraints and accelerating the performance time of popular simulation applications by 21%.

    Maybe the most interesting demonstration will occur this week at the Supercomputing show in New Orleans where we announced this morning VMWare certified Quad-V servers incorporating HyperCloud will be running simulations on virtualization applications in the cloud.

    The MDS Micro .. approached us because they were in need of a technology that could effectively support rapidly growing memory requirements. They indicated that when they evaluated memory solutions, NLST was the clear winner for our ability to help increase server utility in a cloud matrix.

    Certification of HyperCloud memory modules on MDS Micro Quad-V for the cloud matrix increased server performance to its full potential and enables up to 768GB of memory running at 1.3GHz (1333MHz) (?).

    VMWare engineers acknowledge that the uniqueness of HyperCloud as it directly addresses slowness that customers face on a server platform with a large amount of memory.

    In the Q3 we also kept up the high level of activity surrounding the qualification of HyperCloud at both end-users as well as targeted OEM customers.

    As you recall the first of our qualification announcements earlier in the year were SuperMicro and Viglen.

    Both companies have now included HyperCloud memory in a number of RFPs (requests for proposals) for large end-user server installations, and in recent months (unintelligible) that initial orders from customers which have the potential to grow into sizable businesses in the future.

    As we mentioned in August, we have sampled HyperCloud to several dozen potential customers and have begun a more formal qualification process with 5 of those.

    As we have previously indicated, we anticipated that the qualification process at most of those customers would last approximately 3-6 months, depending on their particular requirements.

    As we also indicated the qualification process at the large OEMs is much more complex due to the large number of target hardware platforms (and probably combinations ?) and the broader set of application parameters that must be tested.

    At our larger OEM targets we have completed initial component level evaluations that come before system and application-level testing.

    Following that initial testing, it is typical that engineering work must be performed to optimize product for the target application.

    Our engineers have been very busy interacting with our potential customers in this process.

    The most important development in the last quarter around HyperCloud is the fact that we have started to engage major players in the memory industry, both in the OEM space and in the channel in order to extend the reach of HyperCloud into a broad spectrum of customers.

    We hope to see fruits of this effort in the coming months in the form of a formalized industry alliance between NLST, major OEM customers and a major memory supplier.

    While we cannot speak to many of the details of this project, we believe such an alliance has the potential to move a large part of the memory industry to adopt HyperCloud as the de facto standard for high-capacity, high-performance memory.

    We have been gaining traction in our marketing programs focused on specific vertical computing markets, especially the financial, cloud computing and high performance computing verticals.

    Customers in the financial services industry for instance have a growing interest in HyperCloud’s ability to improve performance of high speed trading of securities.

    The benefit of this vertical is that if the increased performance we can bring translates into even a very small advantage in the trading process and execution, that can bring large aggregate increases in profit, and that would mean a rapid adoption of HyperCloud in servers used to support the financial markets.

    Cloud computing service providers and companies that rely on mass simulations will value HyperCloud’s high bandwidth for possible large scale deployment of virtualized servers such as those we demonstrated at the InterOp show in October.

    These developments could generate significant savings on hardware, facility, licensing and energy costs, as well as being a greener alternative to massive numbers of less efficient servers being deployed today.

    Our marketing team is working with specialists in each of those verticals and they have made great progress.

    Those types of customers provide several potential benefits to our overall marketing drive.

    First when they recognize and demonstrate the benefits of our technology, they will go to their OEMs and ask for HyperCloud.

    Secondly, once we are qualified at an end-user, the volumes can be significant as the benefits can provide an ROI (return on investment) immediately as the technology is implemented.

    On the R&D (research and development) front, we continue to invest significant resources into the development of the next generation HyperCloud, designed to work with the next generation of server chipsets from Intel (INTC) and AMD.

    This is an important undertaking as we extend the benefits of the HyperCloud technologies into higher-speed, multi-core servers running in excess of 2GHz clock speeds.

    Here again, we are starting to work closely with industry’s main players and ecosystem providers in order to facilitate faster, more seamless adoption in the next generation.

    We expect to have the new chipsets ready for customer sampling in the second half of next year, ahead of the launch of Intel’s Romley server platforms.

    On the intellectual property front, we were awarded a new patent recently for our Planar-X technology.

    Through the use of innovative circuit design, Planar-X can provide our customers with additional board space for a larger number of DRAM components, thus achieving higher-capacity registered DIMM modules at a lower cost (because cheaper, lower-density DRAM components can be used to build memory modules of the same capacity that others build with higher-density, costlier-per-GB components).

    This will also be used to develop higher performance lower cost modules when combined with our HyperCloud chipset.

    Planar-X modules are JEDEC compatible, and plug into standard server memory slots.

    So in summary, while the process at some OEMs has been taking longer than we would like, we have been pleased with the high level of interest at those companies and at potential end-user customers as well.

    More importantly, we are taking significant steps to strategically position HyperCloud as the mainstream technology for the memory industry by working to partner with major players and ecosystem providers, in both the current and the next generation of HyperCloud.

    We are also continuing our investment in the next-generation battery-free NetVault.

    We plan to extend the value of HyperCloud and NetVault technologies well into the next decade.

    The end result of all of this activity continues to be a very compelling growth opportunity.

    Gail will now provide you with a more detailed financial update on the Q3 and 9 months results.

    Gail.

    Gail Sasaki:

    Thanks Chuck and good afternoon everyone.

    As you saw in our release this afternoon, revenues for the Q3 ended October 2, 2010 were $10.6M compared to $6.4M for the Q3 ended October 3, 2009. And up from the $9.3M in revenue for the Q2 ended July 3, 2010.

    As reported last quarter, we continue to see increases in sales of our memory modules, into application-specific servers for RAID, industrial and data center optimized applications.

    Overall the sale of our NetVault family of products met expectations.

    However the mix looked different than we expected, being more heavily weighted towards our battery-backed product.

    This mix should change as end-user marketing efforts take hold for the battery-free version.

    In the meantime we are pleased with the overall increase in adoption for the Netvault family.

    Gross profit for the Q3 ended October 2, 2010 was $3M or 29% of revenues, compared to a gross profit of $1.6M or 24% of revenues for the Q3 ended October 3, 2009.

    Year over year gross profit dollars and margins improved due to the 64% increase in revenue as well as the increased absorption of manufacturing costs, as we produced 21% more units than the year earlier quarter with a 5% decrease in factory labor and overhead.

    Net loss for the Q3 ended October 2, 2010 was $4.9M or $0.20 loss per share, compared to a net-loss in the prior period of $2.1M or an $0.11 loss per share.

    The increased losses were due to increased engineering, sales and marketing costs associated with new technology development and sampling and qualification efforts at various OEMs and end-users.

    These results include stock-based compensation in Q3 of $413,000 compared with $631,000 in the prior year period, and depreciation and amortization expenses of $561,000 in the most recent quarter compared with $560,000 in the year earlier period.

    Revenues for the 9 months ended October 2, 2010 were $27.8M, up 136% from revenues of $11.8M for the prior year period.

    Gross profit for the 9 months ended October 2, 2010 was $6.7M or 24% of revenues, compared to gross profit of $1.3M or 11% of revenues for the 9 months ended October 3, 2009.

    Net loss for the 9 months ended October 2, 2010 was $11.9M or $0.51 loss per share, compared to a net loss in the prior year period of $9.9M or a $0.50 loss per share.
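
A quick back-of-the-envelope sketch (using only the rounded figures quoted above, so the result is approximate) of the weighted-average share count implied by the 9-month net loss and loss per share:

```python
# Implied weighted-average share count from rounded call figures.
net_loss = 11.9e6      # dollars, 9 months ended Oct 2, 2010
loss_per_share = 0.51  # dollars per share

implied_shares_m = net_loss / loss_per_share / 1e6
print(round(implied_shares_m, 1))  # ~23.3 million shares (approximate)
```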

    These results include stock-based compensation expense for both periods of $1.2M.

    During the Q3 and 9 months, total operating expense increased to $7.9M and $19.3M respectively, from $4.1M and $11.8M for the year earlier period.

    As noted in our previous call, R&D expenses were anticipated to increase during this quarter in connection with HyperCloud and NetVault next-generation development activity.

    Sales and marketing spending was also expected to grow as we invested in our vertical marketing strategies to work with end-user customers to win qualification slots.

    We expect the higher levels of R&D to decrease over the next couple of quarters as we complete milestone-based second-generation chip development by the end of the first half 2011.

    Sales and marketing spending is also expected to flatten as we deploy the recent investments made in head-count to support our vertical marketing strategies and work with a growing base of customers to secure sockets.

    At October 2, 2010, we had provided a full valuation allowance against net deferred tax assets.

    The effective tax benefit rate of 5.7% for the 9 months ended October 2, 2010 represents the benefit of a one-time operating loss carryback resulting from the enactment of an economic recovery-based tax legislation.

    On a go-forward basis we anticipate a rate near zero percent until we begin to utilize our fully reserved net deferred tax assets.

    We ended our third quarter with cash, cash equivalents and investments and marketable securities totalling $19M compared to $23.1M at July 3, 2010.

    In addition, we have availability of $2.4M on our credit line at the end of the quarter.

    We were a net user of cash during the third quarter – cash was invested in operations to support research and development, sales and marketing and also to support growth.

    Our accounts receivable grew due to increased revenue, and our inventory of longer lead-time components increased to support fulfillment of fourth quarter purchase orders.

    (At the 15 minute 45 second mark)

    During the third quarter, capital expenditures totalled $326,000 compared to $33,000 in the previous year’s quarter.

    We anticipate investment in equipment to support our new products over the next several months of approximately $700,000.

    During the quarter we entered into (?) a $10M line of credit with Silicon Valley Bank with an option to increase availability on this line to $15M anytime through the maturity date of September 30, 2012.

    In connection with this increased line, the bank extended a $1.5M term loan, which we will use for working capital and for the planned capital expenditures to support HyperCloud production and test.

    Managing cash continues to be a high priority for us and we continue to aggressively look for ways to reduce cash outlays whilst strategically continuing to invest in our competitive business position.

    Chuck Hong:

    Thanks Gail.

    That concludes our prepared remarks for this afternoon. We can now begin the Q&A session. Val ?

    Operator (Val ?):

    Thank you sir. We will now begin the question and answer session. As a reminder, if you have a question, please press the star, followed by the 1, on your touch-tone phone …

    And our first question comes from the line of Rich Pugli (?) from Needham and Company.

    Rich Pugli (?) – Needham and Company:

    Good afternoon, Chuck and Gail.

    Chuck Hong & Gail:

    Hi Rich.

    Rich Pugli (?) – Needham and Company:

    Just a few questions. Can you, Gail, give us the revenue breakdown between you know your baseline business and NetVault – those two categories and any HyperCloud revenue.

    Gail Sasaki:

    Um, sure.

    The NetVault battery-free (i.e. using the ultracapacitor) was about 15% of our revenue this quarter.

    And the rest of the business – baseline business – made up about 83%.

    There are some minimal revenues from HyperCloud for proof of concept – those would make up the difference.

    Rich Pugli (?) – Needham and Company:

    Ok. And in terms of the operating expenses. Should we assume that this level of R&D for the first half of 2011 continues ? Or does it kick up before falling down again ?

    Gail Sasaki:

    Um. As mentioned in the .. in what we just went over, we expect R&D to come down from this quarter over the next, you know, several quarters.

    Rich Pugli (?) – Needham and Company:

    So what would be the run-rate after that point.

    Gail Sasaki:

    Would be about 20% less than .. than Q3.

    Rich Pugli (?) – Needham and Company:

    Ok. And then you know obviously, everyone’s interested in the progress of the qualifications at some of the larger OEMs.

    Um .. is there any indication that you’ve gotten from them about when you might be able to move through the next two phases – of the qual (qualification) – and be able to ship as a fully qualified product to one or both and then .. secondly is it possible to ship .. um .. to the two key customers without being qualified ?

    Chuck Hong:

    Rich, on the first question .. um .. we have gotten some indications that .. it has taken longer than what they had anticipated due to the .. the test coverage .. um .. due to the .. the number of platforms .. um .. the sheer number of platforms that they need to get this thing tested on.

    So the process is taking a little longer.

    In terms of shipping .. to other customers .. uh .. in this quarter – in this Q4 we are starting to see initial orders from end customers. From SuperMicro, qualifications as well as other direct qualifications at end-customers.

    Um .. I think we’ll see small revenues from initial orders and we expect that to ramp .. uh .. starting in Q1 of next year (Q1 2011).

    Rich Pugli (?) – Needham and Company:

    Ok. And then .. lastly any progress on the ‘912 patent reexamination by the patent office ?

    Gail Sasaki:

    Richard .. are you referring (?) to the activity with Inphi ?

    Rich Pugli (?) – Needham and Company:

    Yes.

    Gail Sasaki:

    Oh in April .. um .. just to get everyone back up to speed. Inphi filed a request for reexamination at the patent office (USPTO) for several patents including the ‘912.

    But we have already been successful in the first round of the ‘912 reexamination. Um .. on September 1st, the patent office confirmed the patentability of all 51 claims of the ‘912 patent. Which is where things stand on that reexam right now.

    Just also to keep in mind that this is a patent where the original examiner already considered more than 100 pieces of prior art before he granted the patent.

    Last month, both SMART Modular (SMOD) and Google (GOOG) also filed requests for the reexamination of the ‘912 patent. And the patent office is expected to decide whether to grant or deny those requests sometime in January.

    Rich Pugli (?) – Needham and Company:

    So they are basically asking for this same thing that Inphi asked for ?

    Chuck Hong:

    Well, the process is the same. They may have brought other pieces of prior art to the table, and the patent office will then determine whether that art raises any new question of patentability.

    Rich Pugli (?) – Needham and Company:

    Ok.

    Chuck Hong:

    So the first reexam for us – we had all of our claims in that patent .. allowed. So that went well for us.

    Rich Pugli (?) – Needham and Company:

    Ok. Alright I’ll just step back in the queue. Thank you.

    Gail Sasaki:

    Thank you Rich.

    Operator:

    Thank you and our next question comes from the line of Arnab Chanbay (?) of Rob Capital Partners. Go ahead.

    Arnab Chanbay (?) of Rob Capital Partners:

    Thank you. Couple of questions. One, Chuck could you talk a little bit about in terms of where you are seeing traction – with HyperCloud first – is it more with the end-users that run their own data centers or is it through the OEMs and what’s the time-horizon do you think of to reach to get to volume. Thank you.

    Chuck Hong:

    Um .. currently, most of the initial sales that we are seeing after long periods of evaluation are from end-users. People who deploy SuperMicro server boards – the Quad-Vs – um .. where we are qualified. Um .. so there is a bunch of big-name customers which we can’t name here, but we have seen initial orders from them and we expect those orders to lead to high-volume orders in the coming quarters.

    Uh .. the OEM qualification, again, you know they .. will not do a server model by model qualification. It is either qualified – tested and qualified across all of the platforms or the technology doesn’t get released to the market.

    So you know we think things will break here .. uh .. in the next 3-6 months and as I’ve mentioned in .. in the prepared remarks, we’re working hard now to engage some of the other big-name players in the memory industry .. uh .. to market the technology.

    And so we think that’s .. that could .. you know .. that could change the way this product gets adopted .. uh .. we believe it could become a mainstream technology .. uh .. as we get these other players engaged.

    Arnab Chanbay (?) of Rob Capital Partners:

    Great. And if I could ask a question about the competitive landscape (?), I think SMART Modular (SMOD) has talked about a product that’s similar to, I think, both HyperCloud and NetVault. I think Cypress has talked about doing something related to battery-backup .. I’m sorry non-battery backup NetVault.

    Can you tell a little bit about how that competitive landscape (?) has or hasn’t changed .. uh .. since you kind of made this announcement ?

    Chuck Hong:

    Uh .. the product from SMART Modular (SMOD) and the other memory module manufacturers or DRAM manufacturers that is similar to our product is called the Load-Reduced DIMM – LRDIMM – and that product is not available on the current generation of servers. It will be .. uh .. a product released along with Romley .. at the beginning of 2012.

    Um .. and we’ve done a lot of comparisons. The OEM customers HAVE done quite a bit of comparison .. performance comparison between HyperCloud and the LRDIMM – those that have those samples – and they see latency .. uh .. benefits .. uh .. our product runs at very low latency whereas LRDIMM HAS high latencies, which cause basically big problems in high-speed transactions like high-frequency trading (in the financial markets) and .. uh .. simulations.

    So .. that would be .. however that product isn’t available today .. LRDIMM .. we have the only product of its kind today for the current generation of Westmere and the AMD Magny-Cours server platforms.

    The .. in terms of the .. moving forward with .. uh .. you also mentioned Cypress and I believe there is a small group there that is looking at a battery-free product.

    We don’t believe they are shipping that in volume – as we are today. Uh .. we don’t know of any other supplier that is shipping a battery-free .. uh .. RAID backup product. We believe we are the only supplier that is doing that in volume – shipping to a major OEM today.

    Arnab Chanbay (?) of Rob Capital Partners:

    Great. Well thanks for that. I just have one followup question for Gail. I know you talked a little bit about you know controlling expense a little bit. Um .. if you look at your cash position, do you think that you can get to a point where in the next year you will get to you know generating cash you know as your new products ramp up, or do you think that you will have to .. you need additional, you know obviously you did open up the line of credit. I’m just curious what’s your view on that, you know utilizing it or not utilizing it over the sort of next 12 months ? Thank you.

    Gail Sasaki:

    Thanks Arnab. Yes, we believe we can get to a point of cash generation in the next 12 months. Um .. and we will have quite a bit of capacity on our current line (of credit) – that’s why we implemented the $10M line so we will have that available.

    Arnab Chanbay (?) of Rob Capital Partners:

    Ok, thanks Chuck, thanks Gail.

    Chuck Hong & Gail:

    Ok, thank you.

    Operator:

    Thank you. Once again ladies and gentlemen ..

    Our next question comes from the line of Dan Mendelco (?) with Rob (?) Capital Advisors:

    Dan Mendelco (?) with Rob (?) Capital Advisors:

    Got a few please (?). Could you .. just a little more color on the potential partnership with the major memory player. Is that something we might hear more about by the next conference call, or is that a longer time frame ?

    Chuck Hong:

    Um .. we .. are working with .. uh .. a handful of major names .. out there. We can’t .. uh uh .. get into too much detail about that. But .. we believe there is a good likelihood that we’ll be able to .. uh .. formalize an arrangement by which .. uh .. the other .. players get involved to .. uh .. market .. uh .. the technology .. uh .. so that we can extend our reach .. uh .. into various channels and to the various customers. So we are .. uh .. in .. deep discussions at this point.

    Dan Mendelco (?) with Rob (?) Capital Advisors:

    Ok. And then you kind of talked about the thing in the formal qualification process with 5 customers. Should we expect that number to increase this quarter or are the resources kind of focused on getting some of those 5 over the line.

    Chuck Hong:

    Um .. I think .. uh .. you know we’ve got samples in at many many different customers and I think some of those customers are looking for a major deployment as an opportunity to you know finish the qualifications. I think the numbers will increase over the course of next few months.

    Dan Mendelco (?) with Rob (?) Capital Advisors:

    Ok. And last question, kind of on the core business, with memory prices having come in a fair amount in recent weeks. Can you give us a little bit of color on your expectations of what the implications might be for revenue and gross margins ?

    Chuck Hong:

    Uh .. in general, memory demand is a little elastic. The big chunk of the bill (of materials) here is .. DRAMs, and as prices on DRAMs fall, we’re going to be able to get our primary product – the 16GB 2-rank product – to a much lower cost basis and therefore be able to offer that product at a lower price point to the customers.

    So that .. the implication is that we’re hoping that will lead to a higher total available market for that product.

    Dan Mendelco (?) with Rob (?) Capital Advisors:

    I was actually talking about that core business.

    Chuck Hong:

    Oh the core business. Uh .. to the extent that the .. products that we are shipping .. uh .. you know have IP (intellectual property) in them .. and there is not a ready replacement, I think, you know, there should be a neutral-to-positive impact to our gross profit.

    Dan Mendelco (?) with Rob (?) Capital Advisors:

    Ok, so gross profit dollars should be up sequentially in Q4 ?

    Chuck Hong:

    Look .. uh .. we hope so. It’s neutral to positive.

    Dan Mendelco (?) with Rob (?) Capital Advisors:

    Ok. Thank you.

    Operator:

    Thank you, and our next question comes from the line of John Lopez from Sowing Oats (?) Capital Management. Please go ahead.

    Sowing Oats (?) Capital Management:

    Thanks so much. Just one knick-knack (?) one.

    Can you provide just the 10% customers this quarter, the way you disclose them in the Q (10-Q) ? Thank you very much.

    Gail Sasaki:

    That will be forthcoming with the Q (10-Q). Though we have two major customers.

    Chuck Hong:

    Two 10%.

    Gail Sasaki:

    Two 10% customers. One is DELL and the other is FFIV.

    Sowing Oats (?) Capital Management:

    And do you have ballpark of where they were this quarter ?

    Gail Sasaki:

    Um .. The ballpark .. I think DELL is close to 65% ..

    Operator:

    Thank you. And I’m showing there are no further questions in the queue. We’ll send it back over to management for closing comments.

    Chuck Hong:

    Ok. Thank you all for joining us and we look forward to sharing further progress in the next scheduled call. Thank you very much.

  269. Q4 2010 earnings call transcript (not exact)

    http://www.netlist.com/investors/investors.html
    Netlist Fourth Quarter, Year-End Results Conference Call
    Wednesday, March 2nd at 5:00 pm ET

    http://viavid.net/dce.aspx?sid=00008211

    Moderator – Matt Lawson (?) of Allen & Caron (NLST’s Investor Relations firm)

    Chuck Hong – NLST CEO
    Gail Sasaki – NLST CFO

    Matt Lawson – Allen & Caron (Moderator):


    Good afternoon Ladies and Gentlemen. Thank you all for joining us.

    And with that I’d like to turn the call over to Chuck.

    Good afternoon, Chuck.

    at the 2:10 minute mark

    Chuck Hong:

    Good afternoon Matt. Thank you all for joining us to discuss the 2010 year end results and outlook for 2011.

    As you saw from our release earlier today, we had another strong quarter with 51% growth in revenue over last year’s Q4.

    And year over year we more than doubled our revenues.

    We also saw increases in gross profit – 236% growth year over year.

    And 95% growth over the year-earlier quarter.

    And a sequential quarterly increase of 9% in GP (gross profit).

    Much of the growth in the overall business we experienced last year came from our NetVault family of products and our baseline business – which is a combination of flash and other specialized memory modules for data centers and industrial applications.

    We expect the volumes in these businesses to accelerate through this year as our products in this area continue to be well received by the customer base.

    In addition to supporting this baseline growth operationally, we spent a great deal of time and resources last year working to bring HyperCloud to market.

    We started with engineering prototypes at the beginning of 2010 and through the course of the year, in response to customer and partner feedback and requests, we implemented multiple revisions and refinements.

    In this process, we worked closely with most of the major server OEMs, major storage OEMs, end-customers, DRAM and CPU suppliers, and motherboard manufacturers.

    Each of these partners provided important feedback from their perspective to make HyperCloud not only a better performing product, but one that could achieve broad compatibility with a wide breadth of technical requirements requested by each of our partners.

    They represent a broad spectrum of the entire industry infrastructure.

    Also in this process, there have been many cycles of product evaluation, technical feedback, product refinement.

    And numerous testing cycles in a variety of server platforms and a concerted effort by NLST and our partners to make HyperCloud a more robust and highly reliable product that can withstand the stresses of the harsh data center environment.

    All of this resulted in a longer than expected gestation cycle from prototype to mass production.

    Through this process, our partners have remained very enthusiastic about the technology and the benefits they would eventually derive.

    The partners have also remained patient, recognizing that the HyperCloud chipset is inherently a complex product.

    But they also recognized early on that the HyperCloud IP is both a short-term solution, as well as a fundamental long-term solution, to the growing problem of memory bottleneck in the data center space.

    at the 5:15 minute mark

    So they continue to provide detailed feedback on their individual requirements and what they would like to see in both the current and the next generations of HyperCloud.

    The important point is that HyperCloud in its various configurations continues to successfully undergo testing today and is at various stages of evaluation at the OEMs and at end-customers.

    And we expect to see completion of some of this testing in the coming months.

    As a precursor to those events, most recently, 8GB and 16GB HyperCloud products passed an extensive battery of certification tests and achieved independent industry certification from CMTL – Computer Memory Test Labs.

    This achievement was a further validation of our interoperability on the current generation of Intel server motherboards.

    On the end-user front, Red Bull Racing announced a 60% greater server utilization when running Formula 1 racing car simulations and computational fluid dynamics (CFD).

    This press release underlined the performance benefits which are currently available with HyperCloud memory.

    HyperCloud memory is also listed on the VMWare website as one of only two memory partners of VMWare.

    As many of you know, as VMWare and other companies innovate and provide ways to increase server utilization at end-users, the need increases for memory performance within each virtual machine.

    And our technology is a key enabler of that pathway.

    HyperCloud is currently included in multiple system configurations for proof of concept testing at VMWare.

    We mentioned on the last call that we had engaged major players, both in the OEM space and in the DRAM space, in order to extend the reach of HyperCloud.

    Our goal here eventually is a formalized industry alliance of major server OEMs, channel partners and CPU and DRAM manufacturers, which results in a broad mainstream supply and use of HyperCloud.

    This would build out the current network of HyperCloud partnerships, consisting of companies such as VMWare, SuperMicro and MSC Software.

    It is difficult to determine an exact timeline of a broad industry alliance.

    We are working – day by day, one company at a time – in order to accelerate the adoption of HyperCloud as the de facto standard for high-capacity, high-performance memory.

    HyperCloud is a technology, we believe, that has been designed for where the server is headed in the coming decade.

    We are encouraged in the progress in all areas of our business this year and anticipate continued growth for each product family.

    We see the revenue mix changing in 2011 in favour of our flagship products – HyperCloud and NetVault NV.

    This reinforces the value of our intellectual property (IP) portfolio, and strengthens our competitive position.

    But we also expect our flash and baseline business to grow on a steady ramp through the course of this year.

    at the 8:40 minute mark

    Since our last call, we announced the qualification of NetVault NV by Compellent Technologies for production shipments in the Compellent Enterprise Network Storage Solution.

    Due to the broad-based market interest in the NetVault-NV technology, we have been in development and plan to introduce a new product platform utilizing our proprietary “Vault Controller” in the coming weeks.

    We foresee a significant revenue increase in the flash-backed battery-free products in 2011.

    On the R&D front, we continue to invest resources to complete the development of the generation 2 HyperCloud chipset.

    This is designed to work with the next generation of server chipsets from Intel and AMD.

    This is an important undertaking for us as we extend the benefits of HyperCloud technologies into higher speed, multi-core servers, running in excess of 2GHz clock speeds.

    The next generation of HyperCloud will also consume less power.

    We have recently started customer sampling of prototype parts of this generation 2 HyperCloud, well ahead of the OEM qualification cycle.

    On the intellectual property (IP) front, we continue to make progress as we were recently awarded two patents protecting the company’s innovations that utilize rank-multiplication and load-reduction technologies.

    One of these patents further extends the company’s intellectual property claims related to rank multiplication.

    This technology, used in HyperCloud memory modules, enables the system to address more memory capacity in a standard 2-processor server.

    In addition, rank-multiplication technology provides HyperCloud the advantage of using the mainstream 2Gbit DRAM vs. the higher cost-per-bit 4Gbit DRAM, which was recently introduced for making the high-capacity 16GB 2-rank registered DIMMs for server memory.
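
The chip-level economics behind that claim can be sketched roughly as follows. The chip prices are illustrative assumptions (not figures from the call); the chip counts ignore ECC for simplicity:

```python
# Rough sketch of why rank multiplication can lower the cost of a 16GB module:
# it lets the module be built from four physical ranks of mainstream 2Gbit DRAM
# that the memory controller sees as two ranks, instead of two ranks of costlier
# 4Gbit DRAM. Prices below are assumptions for illustration only.

MODULE_GBIT = 16 * 8  # 16 GB of data capacity, ignoring ECC chips

def chips_needed(chip_density_gbit):
    """Number of DRAM chips needed to reach the module's data capacity."""
    return MODULE_GBIT // chip_density_gbit

# Assumed per-chip prices; newly introduced high-density parts often carry
# a cost-per-bit premium early in their life cycle.
PRICE_2GBIT = 2.00  # assumed dollars per 2Gbit chip
PRICE_4GBIT = 5.00  # assumed dollars per 4Gbit chip

chips_2g = chips_needed(2)  # 64 chips -> 4 physical ranks, presented as 2
chips_4g = chips_needed(4)  # 32 chips -> 2 physical ranks

print(chips_2g, chips_2g * PRICE_2GBIT)  # 64 chips, $128 in DRAM
print(chips_4g, chips_4g * PRICE_4GBIT)  # 32 chips, $160 in DRAM
```

Under these assumed prices the 2Gbit build is cheaper despite using twice as many chips, which is the advantage rank multiplication is said to unlock.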

    Also on the IP front, many of you have noted, and will note, the recent actions by other players in the memory space to challenge our patent position related to a number of platforms, including HyperCloud.

    While these processes will need to run their course, we are comfortable in our position and confident in the enforceability of our patents.

    It is also interesting to see that more companies are attempting to use our IP, or challenge our ownership.

    While we do not believe these efforts will succeed, we believe they are the result of a belated recognition that HyperCloud is the most optimal technology available to address the growing memory constraints in the data center server.

    at the 11:20 minute mark

    So in summary, we continue to position HyperCloud and NetVault as technology standards for the industry, while working to get these products to market, in order to monetize the IP which resides in them.

    At the same time we continue to invest in the next generation of both product platforms.

    Gail will now provide you a more detailed financial update on the Q4 and year-end results.

    Gail ?

    Gail Sasaki:

    Thanks Chuck and good afternoon everyone.

    As you saw on our release this afternoon, revenues for the Q4 ended Jan 1, 2011 were $10.1M, up 51% when compared to $6.7M for the Q4 ended Jan 2, 2010.

    In the Q4 we continued to see overall growth in the sales of our memory modules, including sales into application-specific servers for RAID and data center optimized applications.

    Revenue for our NetVault family of products increased from prior year’s quarter by 65% and year-over-year by 168%.

    The NetVault mix during the Q4 was a bit different than expected as it was weighted towards our battery-backed product.

    That mix will reverse during the early part of 2011 toward the higher ASP, more robust feature set and battery-free version of NetVault (NetVault-NV) as our OEM partners’ marketing efforts take hold and they see improved order traction from their customers seeking the operating, ecological and economic advantages of that product.

    HyperCloud sales, although still not in production volume, were associated with orders for proof-of-concept at end-user customer targets.

    Sales of the more commodity-like RDIMM and industrial SO-DIMM products did come under some pressure in Q4 due to the greater supply of DRAM and subsequent decrease in pricing of the last couple of quarters.

    at the 13:30 minute mark

    Gross profit for the Q4 ended January 1, 2011 was $3.3M or 32.6% of revenues.

    Compare to a gross profit of $1.7M or 25.1% of revenues for Q4 ended Jan 2, 2010.

    The year-over-year gross profit dollars and margins improved due to the 105% increase in revenue as well as the increased absorption of manufacturing costs as we produced 88% more units than the year earlier quarter with no related increase in the cost of factory labor and overhead.

    We are planning on a range between 25% to 30% for our gross profit percentage during 2011.

    Which will be dependent on the product mix, DRAM cost and continued absorption of manufacturing cost in each quarter.

    Net loss for the Q4 ended Jan 1, 2011 was $3.2M or a $0.13 loss per share.

    Compared to a net loss in the prior period of $3.0M or a $0.15 loss per share.

    These results include stock-based compensation in the Q4 of $261,000, compared with $257,000 in the prior year period.

    at the 14:40 minute mark

    And depreciation and amortization expense of $596,000 in the most recent quarter, compared with $557,000 in the year ago period.

    Revenues for the year ended Jan 1, 2011 were $37.9M, up 105% from revenues of $18.5M for the year prior.

    Gross profit for the year ended Jan 1, 2011 was $9.9M or 26.3% of revenue, compared to gross profit of $3.0M or 16% of revenues for the prior year.

    at the 15:20 minute mark

    Average product ASPs (Average Selling Prices) have increased by 136% from $22 to $52 year over year.
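
A quick sanity check of the quoted growth figure, as a minimal sketch of the percentage arithmetic:

```python
# Year-over-year growth in average selling price, using the figures from the call.
old_asp = 22  # average ASP in dollars, prior year
new_asp = 52  # average ASP in dollars, most recent year

growth_pct = (new_asp - old_asp) / old_asp * 100
print(round(growth_pct))  # 136
```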

    This increase is mainly due to the product mix trends.

    With a planned mix change towards more NetVault-NV and HyperCloud, we anticipate further increases in the average ASPs throughout 2011.

    Quarter over quarter we saw a 19% decrease in the average ASPs, partially due to the declines in DRAM pricing, but also due to the change in the product mix, as we discussed earlier, towards our lower-ASP battery-backed product (NetVault-BB).

    Net loss for the year ended Jan 1, 2011 was $15.1M or a $0.64 loss per share, compared to a net loss in the prior year of $12.9M or a $0.65 loss per share.

    The increased loss was due to increased engineering, sales and marketing costs associated with new technology development and sampling and qualification efforts at various OEMs and end-users.

    These results include stock-based compensation expense for the year ended Jan 1, 2011, of $1.5M compared with $1.5M in the prior year.

    Total operating expense declined to $6.5M from the $7.9M in the previous quarter as we had estimated during the last quarter’s call.

    at the 16:45 minute mark

    We expect that operating expenses will be flattish at the Q4 level during the first half of the year, and then ramp slightly in the second half.

    Year-over-year total operating expenses increased from $16.4M to $25.8M, primarily due to increases in non-recurring engineering charges, headcount, material expenses related to product builds, primarily for HyperCloud development and legal fees, as we increase patent filing and protection activities in the high performance computing market.

    The sales and marketing spend has also grown during the year as we expanded sampling and qualifications activities by a large percentage.

    And invested in a new head count necessary to execute our vertical marketing strategy of engaging with end-user customers directly, including programs that are moving forward with industry leaders in financial services and virtualization.

    at the 17:40 minute mark

    Sales and marketing expense is also expected to somewhat flatten in the coming quarters as we have reached the level necessary to support our target vertical programs and to work directly with the growing base of existing and potential customers to secure sockets.

    at the 17:55 minute mark

    These increases in R&D and sales and marketing throughout 2010 have been partially offset by a small decrease in SG&A expense between years, and also between consecutive quarters.

    Earlier we also mentioned a significant increase in manufacturing productivity with no related increase in cost.

    We anticipate continued productivity increases as we go forward in 2011.

    at the 18:20 minute mark

    At Jan 1, 2011 we have provided a full valuation allowance against net deferred tax assets.

    The effective tax benefit rate of 5% for the year ended Jan 1, 2011 represents the benefit of a one-time operating loss carryback, resulting from the announcement of an economic recovery-based tax legislation.

    On a go-forward basis we anticipate a rate near 0% until we begin to utilize our fully reserved net-deferred tax asset.

    We ended our Q4 with cash, cash equivalents and investments and marketable securities totalling $16M, compared to $19M at Oct 2, 2010.

    In addition we had unutilized availability of $2.2M on our credit line at the end of the quarter.

    at the 19:20 minute mark

    We were a net user of cash during the Q4, as cash was invested in operations – to support R&D, sales and marketing and also to support growth as our accounts receivable grew as a result of increased revenue.

    And our inventory of longer lead-time components increased to support fulfillment of Q1 purchase orders for our NetVault family and base business and qualifications activities for HyperCloud.

    at the 19:30 minute mark

    During the Q4 capital expenditures totalled $224,000 compared to $84,000 in previous year’s quarter.

    We anticipate investment in equipment to support our new products over the next several months of approximately $500,000.

    at the 19:50 minute mark

    We continue to be mindful of our cash use and will continue to find ways to control our burn rate, even as we continue our expressive (?) cross-product (?) and marketing initiative.

    We expect to use a mix of cash and some credit from our line to finance these investments until we reach financial breakeven, which is expected later this year.

    at the 20:10 minute mark

    We also anticipate sufficient capacity on our current $15M line of credit for working capital needs.

    Question & Answer session:

    at the 20:45 minute mark

    Rich Kugele – Needham & Co:

    Thank you. Good afternoon.

    Uh .. just a few questions from me. I guess first .. um .. on HyperCloud. Last quarter you had talked about a component issue that had forced a kind of a .. restart in the qual process.

    Today you are talking about being back in qual, so I assume the component issue has been resolved and .. any comments there ?

    Chuck Hong:

    Uh .. Rich .. you are referring to .. uh .. DRAM specific component issue.

    Yeah that was resolved .. uh .. as of probably 2 months ago.

    Rich Kugele – Needham & Co:

    Okay .. um .. and from a .. from a breakeven standpoint, what should we assume the revenue would need to be, and is it possible to reach that revenue at some point in 2011 .. um .. if HyperCloud isn’t a material part of the mix ?

    Gail Sasaki:

    Hi Rich. We expect about a $20M revenue per quarter for breakeven. And it is possible to reach that with our our baseline business plus our NetVault family.

    Rich Kugele – Needham & Co:

    Uh .. are you willing to give us a sense in Q1 on what the mix might be between the base business and the NetVault line ?

    Just .. a relative mix between the two categories.

    Gail Sasaki:

    Um .. I would if Q1 was over, but I think it is still a little early.

    Rich Kugele – Needham & Co:

    Okay. Alright, well I’ll get back in the queue. Thank you.

    Gail Sasaki:

    Thanks Rich.

    at the 22:45 minute mark

    Arnab Chanda – Roth Capital:

    Yeah hi. Couple questions. First for Chuck maybe you could tell us a little bit about .. does it seem like .. maybe I misunderstood please let me know .. that you know really it’s more of a NetVault 2 (means “HyperCloud 2″ – corrects below) that is going to see any kind of adoption .. because maybe the OEMs first want to evaluate and take a look at .. you know technology that’s so different .. than you know what they’ve used in the past ?

    Or is there a possibility that you’re going to get .. I’m sorry .. I’m talking about HyperCloud .. “HyperCloud 2″.

    Is that is that more likely .. could you see some adoption on “HyperCloud 1″ ?

    I’ll followup .. thank you.

    Chuck Hong:

    Yeah, Arnab. The current product that’s in testing and qualification is obviously “HyperCloud 1″, and it’s gone through .. uh .. many months and quarters of testing.

    And once that’s completed that will start to ship into the current server base – mostly Westmere, the Intel Westmere as well as the AMD Magny-Cours based servers.

    Uh .. the gen-2 product is targeted for the next-generation and that will be the Romley, which is expected to launch at the end .. very end of this year.

    at the 24:35 minute mark

    Westmere .. uh .. will continue to ship well into 2012, so .. uh .. we expect to see the HyperCloud 1 product ramp .. uh .. this year, after qualification, and be sold you know well into 2012.

    While we will get the HyperCloud 2 product out – that is faster and that is lower power and we will .. you know .. start to get those products evaluated .. early.

    And to get them qualified and get ready to ship you know when Romley launches at the end of the year.

    So probably see them .. uh .. ship concurrently.

    Arnab Chanda – Roth Capital:

    Ok, great. If I can ask another qualitative question about the adoption of HyperCloud. Seems like it is roughly taking at least a year longer than maybe what what you thought about or hoped for initially.

    What .. are there kind of .. could you talk about what the factors are .. is it because the product has certain issues, is there market adoption question .. can you talk a little bit about what you think has caused it to take longer than you .. than originally had thought.

    at the 25:55 minute mark

    Chuck Hong:

    I don’t think it took a year longer than we anticipated .. uh .. you know this is a .. as you know I mentioned this is a highly complex chipset, and you’ve got a fairly complex ecosystem of CPU, BIOS, you have server .. uh .. OEM server manufacturers, you have DRAM manufacturers.

    All of this has to come together seamlessly, and you’ve got you know any one of these OEMs, several dozen server platforms.

    So it was .. quite a bit of WORK.

    Uh .. in terms of making the product plug and play and compatible.

    So .. uh .. probably took you know longer than we expected, but certainly was not a year longer than we had anticipated.

    This product as we mentioned was in prototype form at the beginning of last year (2010).

    It’s been about 12 months since that time.

    And .. uh .. things are progressing very nicely at this point.

    at the 27:10 minute mark

    Arnab Chanda – Roth Capital:

    So .. uh .. Chuck .. I’m going to ask a question on that .. uh .. first two questions if I could.

    One is – do you think you could have invested more in R&D – is that kind of your more cautious on investment and that’s part of what it took longer ?

    Or .. and then secondly do you expect any revenues from HyperCloud at the end of this year, or is it more likely to be in 2012 ?

    at 27:30 minute mark

    Chuck Hong:

    Um .. we invested .. you know there are different parts of the R&D that go into a product like this.

    First is on the chipset itself.

    Uh .. the architecting, the design, the specing out (spec = specification) of the chipset.

    Implementing the silicon .. um.

    And then you have to bring up in the application level .. uh .. you know in retrospect we probably could have .. uh .. spent more resources on the latter part of the R&D.

    Those are kind of the last-mile issues that took us a long time to .. uh .. you know resolve and get our arms around.

    The other thing was that customers at the different server manufacturers – they’re constantly tweaking as well.

    DRAM manufacturers are tweaking their DRAMs and server (manufacturers) are tweaking their server boards – the thermals, the electricals and so forth.

    So .. uh .. you know we .. it is a gauntlet that took us longer than we had anticipated to get through.

    Lot of good feedback in that process from the customers and the enablers – the technology enablers that are out there.

    And you’ve got a much more robust solid product than we had started out with a year ago.

    Uh .. in terms of revenue traction. Definitely .. uh .. you will see .. it will not be 2012.

    You will see traction .. uh .. you know, fairly quickly.

    Arnab Chanda – Roth Capital:

    Thanks Chuck.

    Gail Sasaki:

    Thanks Arnab.

    at the 29:35 minute mark

    Orrin Hirshman – AIGH Investment Partners:

    Hi how are you.

    Um .. can you just mention a little bit in terms of utilizing the credit line etc.

    And the implication of your comments was that you can survive without raising additional equity .. until you can get to be profitable, cash flow positive.

    Can you comment a little bit more on that – number one. And then I will followup on HyperCloud.

    at the 30:00 minute mark

    Gail Sasaki:

    We believe that we have (unintelligible) cash and working capital availability on our line for the next 12 months.

    And we .. you know if .. if and when (?) we will consider you know sources of capital raising .. um .. to buffer our balance sheet.

    But .. we do not have any plans currently.

    You had a question about HyperCloud ?

    at the 30:30 minute mark

    Orrin Hirshman – AIGH Investment Partners:

    Yes. You answered the question in terms of – one, we can hope to see HyperCloud revenue .. but .. can you also answer just (if there is) anything on the competitive front that’s really come close that’s slowed you down in terms of the qualification processes at any of the major OEMs ?

    at the 30:45 minute mark

    Chuck Hong:

    Uh .. the competitive products .. uh .. you know have been out there and have been anticipated.

    Uh .. it’s .. uh .. fairly independent of the progress that WE’VE made, and the process that we’ve undertaken to get the product tested and qualified at the OEMs.

    It’s been independent of .. what the competitive .. uh .. products have done.

    We have .. uh .. we have a .. much faster .. uh .. and a better performing product .. than the LRDIMM which is the .. uh .. a similar product that is .. that is going to be available for Romley .. uh .. that product is not available at Westmere .. um .. currently.

    So .. we believe our product .. with higher performance will .. win out.

    And that is .. the feedback that we have received – objective feedback – from the OEMs.

    As they’ve gone through performance testing of our product vs. the LRDIMM in .. in their servers.

    Orrin Hirshman – AIGH Investment Partners:

    Ok, thank you.

    at the 32:30 minute mark

    Ian Mendoza – Prospect Capital

    Uh .. hi guys.

    Had a .. couple of questions. Some of them were were just answered .. uh .. but could you maybe talk a little bit about .. uh .. what you’re seeing on the competitive front with .. uh .. with NetVault .. any .. new entrants there.

    And maybe as part of the answer you can remind me if you are sole sourced at DELL, or if they use someone else .. as well ?

    at the 33:00 minute mark

    Chuck Hong:

    Uh .. on the the NetVault .. product.

    Um .. you know there are .. uh .. smaller .. uh .. competitors.

    We .. are .. we believe we are their only .. manufacturer .. of .. a .. uh .. independent manufacturer of .. what we call the “cache to flash” product category .. uh .. that is shipping this product in high volume into major OEMs.

    There is .. uh .. another major OEM that is building a similar product .. in house.

    Um .. so this NetVault product .. uh .. currently .. uh .. at DDR2 is being shipped into .. server .. uh .. RAID backup .. applications.

    Uh .. but we are starting to see much .. more .. opportunities in .. storage .. uh .. which is a much broader, many many more applications in the storage space.

    Uh .. and that’s where we’ll probably. We are entertaining those opportunities today with the current DDR2 NetVault as well as the DDR3 NetVault product which is soon to be introduced.

    Uh .. so .. small customers .. uh .. small competitors for this .. um .. for the current NetVault product .. um .. at at the next-generation for the storage applications, you know, we’ll see who’s out there.

    at the 34:50 minute mark

    Ian Mendoza – Prospect Capital

    Ok. What are the qualification cycle times like for the storage opportunities – are these 2011 opportunities or more .. 2012 ?

    Chuck Hong:

    There are a number of .. uh .. qualifications that are under way .. uh .. today with the existing DDR2 NetVault.

    And we will .. uh .. you know we probably have .. uh .. a dozen opportunities with with DDR3 NetVault in the various storage and industrial type of applications.

    And the qualification cycles are .. uh .. quite long .. um .. with with the NetVault product because it’s .. uh .. you know it’s mission-critical and it’s .. uh .. a lot of integration that needs to happen between .. uh .. our subsystem and and the storage systems.

    at the 35:50 minute mark

    Ian Mendoza – Prospect Capital

    Ok. That’s helpful.

    And I had one one question getting back to HyperCloud.

    Uh .. in the press release about the (testing ?) – I think it made some reference to .. uh .. uh .. memory densities of 288Gbits (should be 288GB i.e. 288GBytes) and I thought that .. uh .. the spec was for 384 (i.e. 384GB) ?

    Has the spec changed as you’ve gone through the process of refining the product or or is this .. so I guess that is the first question, and if it has will the spec be different for “HyperCloud 2″ than for “HyperCloud 1″ ?

    at the 36:25 minute mark

    Chuck Hong:

    Well, I I think in the latest .. uh .. uh .. InterOp show .. uh .. was a few months ago in New York.

    Uh .. we demonstrated 288 GIGABYTES (288GB) of .. memory .. in a server.

    Uh .. which then on the screen showed that it was hosting 100 clients.

    Uh .. with the benefit of 288GB running in that server.

    Uh .. and that that’s been the maximum that we have demonstrated.

    Initially the 384GB .. um .. memory capacity was .. uh .. advertised.

    Our product is capable of .. uh .. running up to 384GB, however server systems are not there .. to do that.

    They would need 4 DIMMs per channel .. uh .. in order to accommodate 384GB – there is currently no server out there.

    The maximum DIMM sockets per channel is 3 today.

    So if they get to 4 .. uh .. we would we would be able to get to those high kinds of densities.

    (Explanation: 4 DIMM sockets per channel x 3 channels per processor x 2 processors in 2-socket server = 24 DIMM sockets total in that 2-socket server, and using 16GB HyperCloud would give a total of 24 x 16GB = 384GB total memory if you have 4 DIMM sockets per channel)
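    The capacity arithmetic in the explanation above can be checked directly; the 16GB HyperCloud module size and the 3-channel, 2-socket server layout are the figures quoted in the call.

```python
# Server memory capacity from the DIMM-count arithmetic in the transcript:
# sockets-per-channel x channels-per-CPU x CPUs x capacity per 16GB module.
def server_memory_gb(sockets_per_channel, channels_per_cpu=3, cpus=2, dimm_gb=16):
    return sockets_per_channel * channels_per_cpu * cpus * dimm_gb

# Today's servers: 3 DIMM sockets per channel -> the 288GB InterOp demo.
print(server_memory_gb(3))  # 288
# With 4 sockets per channel, the advertised maximum becomes reachable.
print(server_memory_gb(4))  # 384
```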

    at the 37:45 minute mark

    Ian Mendoza – Prospect Capital

    Ok.

    Then were there any .. speed tests done as part of that testing or or that you were able to kind of do at at .. InterOp.

    Chuck Hong:

    Yes .. I believe it was running at .. uh .. 1333MHz (i.e. full speed).

    at the 38:05 minute mark

    Ian Mendoza – Prospect Capital

    Ok. So that part (unintelligible). Very good. That’s helpful.

    And I guess, kind of last question, you know with the qual cycles and the expense of targeting these major OEMs with with NetVault AND HyperCloud, how do you .. how do you prioritize or .. are you forced to prioritize your your resources on on kind of the opportunities that you think are best ?

    Or are you able to kind of go out there and .. and kind of .. be in all the bakeoffs (i.e. contests).

    at the 38:30 minute mark

    Chuck Hong:

    Well .. with the major OEMs and .. uh .. you know .. uh .. major server and storage OEMs, that that storage .. that prioritization has already .. you know .. is de facto in place.

    I mean that .. that took place, those decisions were made on .. uh .. where to qualify, who to engage, you know many quarters ago.

    And .. it’s just taken a long time to work through that process and .. that’s that’s you know we feel like we’re getting down to the .. you know 2-yard line on all of those .. uh .. qualifications.

    And so those .. those priorities have already been set.

    And in terms of the new priorities, with NetVault .. uh .. you know you go through .. uh .. an ROI (return on investment) analysis of the .. uh .. the potential opportunities .. uh .. and .. uh .. you know we pick and choose the the most attractive ones as would ANY business.

    Ian Mendoza – Prospect Capital

    Very good. Alright. Thanks guys.

    Gail Sasaki:

    Thank you.

    Chuck Hong:

    Thank you.

    Operator:

    .. no further question .. like to turn the call over to management for closing remarks ..

    at the 39:45 minute mark

    Chuck Hong:

    Thank you all again for .. uh .. being involved with NLST and we look forward to sharing further information on our progress in the upcoming quarters.

    Thank you very much.

  270. http://www.netlist.com/investors/investors.html
    ROTH OC Growth Stock Conference
    Wednesday, March 16, at 11:00 am Pacific Time

    http://www.wsw.com/webcast/roth24/nlst/
    Wednesday, March 16, at 11:00 am Pacific Time

    23rd Annual ROTH OC Growth Stock Conference
    March 13-16, 2011
    Laguna Niguel, California
    Roth Capital Partners

    ——————–
    Moderator:

    I’m very pleased here to welcome another .. of our local favorites – Netlist Inc.

    And we are very fortunate to have .. uh .. Chuck Hong, President and CEO, to make the presentation.

    Chuck Hong:

    Well, thank you for joining us today and .. uh .. I’d like to take this opportunity .. to go through .. uh .. the memory challenges .. uh .. server memory challenges in cloud computing.

    And how NLST solutions .. uh .. help address and resolve some of those .. uh .. issues.

    (pause)

    Brief overview of the company. We were founded 10 years ago here in Irvine (CA) .. uh .. two miles down the road.

    We .. went public on NASDAQ at the end of 2006 .. uh .. and we have a factory.

    Most of our design, sales and marketing .. uh .. work is done here in Irvine and the manufacturing of our end memory module products is done in .. Suzhou, China outside of Shanghai.

    Over the years most of our business has been done with .. uh .. major OEMs .. uh .. IBM, HP, DELL, FFIV.

    In the past 3 years since IPO, the company’s embarked on a couple of major breakthrough technologies .. uh .. one is HyperCloud, and the other is NetVault.

    We’ve got an extensive patent portfolio built around those .. uh .. two technologies.

    Our target market is .. uh .. is in the cloud, in the data centers .. uh .. for storage and servers.

    And combined, it’s about a $3B addressable market .. uh .. this year. And growing.

    at the 2:10 minute mark

    Uh .. as many of you have heard about the .. the emergence of cloud computing .. um .. I think the thing to note is that cloud computing – most of the applications in the cloud are highly .. uh .. DRAM memory-intensive.

    You have various different kinds of memory which we can go through here, but .. DRAM is the main .. uh .. memory in a server which .. uh .. interfaces with the CPU.

    Uh .. so in .. in a lot of the social networking, video downstreaming, virtualization .. uh .. where you’re reducing the number of servers to get more efficiency out of each.

    High performance computing (HPC) where you are doing simulations to .. uh .. and modelling .. um .. securities trading. All of these require .. uh .. quite a bit of .. uh .. they are all DRAM-intensive.

    at the 3:10 minute mark

    On the other hand, on the supply side .. um .. you see huge shifts in the DRAM landscape and .. (pause) .. for the first time this year, flash will exceed DRAMs in terms of worldwide shipments .. uh .. and DRAM investments will be decreasing. You’ll have less and less DRAM manufacturers putting in big dollars.

    Uh .. they face financial as well as technological .. uh .. difficulties in progressing DRAM density .. uh .. over .. DRAMs have been around 30 years, but it’s kind of now hitting the ceiling in terms of .. uh .. the density progression.

    And so DRAM technology is not keeping up, despite the increases in the demand for DRAMs.

    at the 4:10 minute mark

    So .. so all of this – the pace of technology – creates the need for faster and denser memory.

    These are some of the variables:

    – multicore processors that are built by INTC and AMD require more memory
    – virtualization – fewer servers doing the work of many servers, require more high density memory (high density because number of DIMM sockets being limited)
    – and cloud computing .. uh .. where you have server consolidation, requires more memory (each VM running in VMWare requiring 4GB or so per VM, for example, with each processor core running a few VMs per core)

    That results in what we call a “server memory gap”, where as you can see here starting in the next couple of years, you will see a huge gap between what the ideal memory is .. uh .. needed in these servers, compared to what will be available from the industry .. uh .. without our solution.
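    The virtualization arithmetic behind that "server memory gap" can be sketched; the ~4GB-per-VM figure is the transcript's own example, while the 3-VMs-per-core count is an assumed illustration of "a few VMs per core".

```python
# Rough sketch of the "server memory gap" demand side. gb_per_vm=4 is the
# figure quoted in the talk; vms_per_core=3 is an assumed illustration.
def ideal_memory_gb(cores, vms_per_core=3, gb_per_vm=4):
    return cores * vms_per_core * gb_per_vm

# A 2-socket server with two 6-core CPUs (12 cores total):
print(ideal_memory_gb(12))  # 144
```

    Under those assumptions a 12-core box "wants" 144GB, well above what typical installed configurations of the day provided, which is the gap the slide describes.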

    at the 5:05 minute mark

    The other way to frame this problem is I/O congestion (I/O = input/output).

    And I know .. uh .. there has been a lot of talk about .. within the network .. uh .. I/O bottlenecks creating problems in cloud computing.

    If you were to do more of .. uh .. you were to run your servers more efficiently and do more of the work within the server, between the CPU and memory .. uh .. there would be less of a need to go OUTSIDE of that server to fetch data.

    So, because the servers of today are not being run efficiently, there is a lot of data having to go out to the solid-state drive (SSD) or to the hard drive. And that is creating .. I/O congestion.

    And some of the factors that are impacting that are .. the write speed of the hard drive, the location of the storage devices relative to the server, and the utilization of the server.

    at the 6:10 minute mark

    So if you look at the various types of .. uh .. memory .. uh .. and this is .. you can look at this as a storage hierarchy of data .. uh .. within a server and then a network.

    Starts with a CPU and there is SMALL amounts of cache memory in an INTC or an AMD CPU.

    And then you have DRAM – that’s your main memory, that’s your volatile memory (volatile meaning it goes away if shut off power).

    You have then PCI-SSD .. uh .. which is a solid state drive being run on a PCI (socket) .. uh .. and Fusion IO is an example of that solution, and then you have rotating media which is the hard drive (rotating disk platters).

    at the 6:50 minute mark

    And then you’ll see those numbers .. um .. DRAMs are run at nanoseconds – 10 nanoseconds.

    SSDs are 10 microseconds (1 microsecond = 1000 nanoseconds – thus 10 microseconds = 10,000 nanoseconds).

    And then you’ve got 100 microseconds for .. uh .. SATA SSD (100 microseconds = 100,000 nanoseconds).

    And then you’ve got hard drive being run at .. milliseconds (1 millisecond = 1000 microseconds = 1,000,000 nanoseconds).

    And those are differences on the order of a 1000.

    You see that DRAMs – nothing can get to the speeds of DRAMs.
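    The latency ladder just described can be tabulated in one unit; these are the speaker's round numbers, not measured benchmarks.

```python
# Storage-hierarchy latencies as quoted in the talk, converted to nanoseconds.
latency_ns = {
    "DRAM":       10,          # ~10 ns
    "PCIe SSD":   10_000,      # ~10 microseconds
    "SATA SSD":   100_000,     # ~100 microseconds
    "hard drive": 1_000_000,   # milliseconds range (~1 ms shown here)
}

for device, ns in latency_ns.items():
    print(f"{device:10s} {ns:>9,} ns = {ns // latency_ns['DRAM']:>7,}x DRAM")
```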

    And, this .. in the server .. and in the storage space.

    So if you are running a .. if you are pulling up a youtube video.

    If it is run off DRAMs, you are going to get a seamless .. uh .. you are going to get good quality.

    If there is not enough DRAM .. not enough FAST DRAM, the CPU would have to go out to the SSD or to the hard drive to fetch that video. And that’s where you are going to see a lot of the buffering.

    Same thing in .. uh .. financial transactions.

    In high speed trading, high frequency trading, you want to do that off a DRAM and not go out to the hard drive, or you will have lost that trade (because high-frequency trading depends on making a trade well before others in the market and they make money from the small time-difference advantage they have over other traders).

    at the 7:50 minute mark

    So here is a look at our product, and basically the .. the core of this product is the chipset which controls all of the DRAMs.

    You have a register device, and an isolation device.

    One performs what we call “rank multiplication”.

    The other 9 devices perform “load reduction”.

    “Rank multiplication” is simply taking 2 lower-density DRAMs and making it look like one to the CPU.

    “Load reduction” means you are loading .. the the .. you are reducing the load on these chips so that the chips will run faster.

    And those are the two .. uh .. IP – the fundamental IP that we have.
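    A conceptual sketch of "rank multiplication" as described above (this is an illustration only, not Netlist's actual chipset logic): two physical ranks of lower-density DRAM are presented to the memory controller as one logical rank, with accesses steered by the high-order address bit.

```python
# Illustrative only: map an address in the doubled logical rank onto
# (physical rank, offset). The real steering happens in the register device.
def rank_multiply(logical_addr, physical_rank_size):
    rank = logical_addr // physical_rank_size    # 0 or 1: which physical rank
    offset = logical_addr % physical_rank_size   # address within that rank
    return rank, offset

# The CPU sees a single rank of size 2*N; the chipset splits it across two:
N = 1024
assert rank_multiply(7, N) == (0, 7)       # low half -> first physical rank
assert rank_multiply(N + 7, N) == (1, 7)   # high half -> second physical rank
```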

    And our DIMMs, our memory modules reside next to the CPU in a server. And this is what it looks like.

    at the 8:45 minute mark

    So a diagram for what our product does in a server – you have the CPU .. uh .. on top.

    What we are essentially doing is we are making the data transfer from the CPU to the DRAM – main memory – run MUCH faster and allowing the CPU to recognize all of the memory that resides there.

    Without our chipset, without our technology .. uh .. the data would be transferred very SLOWLY and then they would have to go out to the disk drive to fetch the data, which FURTHER slows down the transactions.

    So with our chipset we have a 44% increase in the bandwidth and a 100% more memory capacity that the CPU can recognize and .. and act on.

    at the 9:40 minute mark

    So these are some of the applications that would benefit greatly from .. uh .. the faster and bigger – faster data transfer and wider bandwidth between the CPU and memory.

    – virtualization mem cache (memory cache ?)
    – oil and gas
    – EDA (electronic design automation ?)
    – and search applications

    at the 10:00 minute mark

    So, HyperCloud minimizes I/O congestion.

    Simply we are making that server run more efficiently, so it does not have to go OUTSIDE of the server, and tax the I/O to get to the data.

    And some of the endorsements from the OEM side, validating the value of a greatly increased memory footprint in the systems of the OEMs.

    On the .. on the end customer side .. uh .. companies like VMWare and MSC software .. uh .. see our product and our IP to be complementary.

    They are trying to make their software run that server more efficiently.

    And without the necessary complementary hardware .. uh .. that software will not be able to do its job.

    So .. that’s .. these are some of the use case perspectives.

    Um .. we are one of the two .. uh .. memory suppliers, or memory IP providers that are partnering with VMWare.

    And .. uh .. it’s a highly .. complementary offering, as I said.

    at the 11:25 minute mark

    VMWare is trying to get one server to do the job of 10 servers.

    But that 1 server then, through their software, through their virtualization.

    But that one server then needs to have multi-core .. have the hardware .. uh .. that’s .. uh .. got the capabilities to .. run .. uh .. their software.

    at the 11:55 minute mark

    So .. talked a little bit about the rank multiplication and load reduction technologies .. uh .. these IP .. this IP came out of our initial work with AAPL, going back to 2004, where we created a chipset .. uh .. that is .. uh .. running the AAPL X Server (?).

    And that was being run off an IBM Power CPU (PowerPC).

    And that .. uh .. they were a sliver of a market back then, but that particular problem that we solved, working with AAPL .. uh .. has led to all of this IP creation.

    And .. uh .. that problem of the I/O bottleneck between the CPU and the DRAM today is an industry-wide .. uh .. problem that exists in .. all servers.

    at the 12:45 minute mark

    Um .. so we we believe that this IP .. allows us .. positions us well .. uh .. for the future where the industry’s going, because they are going to REQUIRE these rank multiplication and load reduction technologies.

    at the 13:00 mark

    So where is .. uh .. the product today .. uh .. in terms of adoption .. um .. it’s been about a year’s worth of work that we’ve undergone with the major server and storage OEMs around the world.

    And it is .. uh .. currently, we believe that the market is .. blanketed .. uh .. with our products.

    It’s at all of the major .. uh .. major OEMs .. uh .. we are also having it currently tested by one of the major CPU vendors.

    Uh .. we’ve got motherboard vendors’ .. uh .. qualifications and CMTL, which is the INTC-compatible memory qualification lab.

    Those qualifications have been achieved, so we believe that we are .. uh .. making good headway and .. uh .. towards .. uh .. achieving broad market adoption this year.

    at the 13:55 minute mark

    And the market opportunity for this .. uh .. HyperCloud product is .. significant .. uh .. looking out .. (pause) .. today we’re estimating this to be a couple of hundred million dollar market opportunity for us, growing to a billion dollars in the next 3 years.

    at the 14:20 minute mark

    So that was HyperCloud – I want to go through a complementary product .. uh .. which we announced yesterday .. uh .. called the EXPRESSvault.

    And this is a product you have a lot of .. HyperCloud .. if you look at HyperCloud, that .. uh .. that makes the server run efficiently by getting the CPU and main memory to talk to each other much more quickly, efficiently.

    What THIS product does is .. uh .. backs up that data.

    If there is a power outage .. some sort of interruption .. uh .. you are going to have that data, which is volatile and live – an ATM transaction that is ongoing – if the power goes out, that data needs to .. survive.

    And that’s what we do with EXPRESSvault. We are backing up that volatile data.

    And this is some .. this shows how important .. uh .. data protection is .. is that .. uh .. many companies actually experience .. uh .. data loss and .. uh .. when they do, one out of 3 goes out of business within 2 years.

    So protection of volatile data .. uh .. transaction data, esp. mission-critical data, within a corporation .. uh .. is very important.

    And the target markets for this product are similar .. um .. it goes into a server as well as storage applications, into grid-computing and a lot of electronic financial trading applications.

    at the 15:55 minute mark

    And here is .. this may look complex, but it is not. This is actually how that data gets backed up.

    You have the data blocks entering on the left into the CPU – that data then goes to the DRAMs and goes back and forth with the CPU – CPU does the data compute.

    If the power goes out in the middle of that data compute, that data is stored in this product – the EXPRESSvault.

    And from that storage it can store it into the hard drive and pull it back out and then .. and then give it back to the DRAM and the CPU when the power is restored.
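The backup-and-restore flow described above can be sketched in a few lines. This is a toy editorial illustration only, not Netlist's actual firmware; the class and method names are invented:

```python
# Rough sketch of the EXPRESSvault-style backup flow described in the talk.
# All names here are illustrative, not Netlist's actual design.

class CacheProtector:
    def __init__(self):
        self.dram = {}       # volatile working data (e.g. in-flight transactions)
        self.flash = {}      # non-volatile backing store

    def write(self, key, value):
        """Normal operation: the CPU reads/writes volatile DRAM."""
        self.dram[key] = value

    def on_power_loss(self):
        """Ultracapacitor holds power just long enough to copy DRAM to flash."""
        self.flash = dict(self.dram)
        self.dram.clear()    # this data would otherwise be lost

    def on_power_restore(self):
        """Restore the saved image so the CPU can resume where it left off."""
        self.dram = dict(self.flash)


vault = CacheProtector()
vault.write("txn-42", "ATM withdrawal in progress")
vault.on_power_loss()        # volatile data survives in flash
vault.on_power_restore()
print(vault.dram["txn-42"])  # the in-flight transaction is recovered
```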

    at the 16:35 minute mark

    So here is what this product looks like, and some of the IP that resides in this.

    It is a PCIExpress interface (PCI = socket on motherboard).

    And it is a .. uh .. the bridge controller device which converts PCI to DDR2 (DDR2 DRAM memory) is our IP.

    And we create that engine and .. uh .. also importantly is a NetVault module which we’ve .. uh .. already have and are shipping in high volume into the DELL PowerEdge product.

    That is .. that is the IP that takes data from DRAM and stores it into flash at the point of power interruption.

    And you have .. uh .. this ultracapacitor technology which replaces battery.

    And so ecologically you get rid of the battery elimination .. battery problem .. uh .. and the need for the technician to go out to the OEMs to check battery every .. every year, pretty much.

    So in comparison to other potential solutions that are out there, we believe that we have .. tremendous value in terms of cost/performance.

    As you can see our solution .. EXPRESSvault .. uh .. has much higher throughput, is much faster and .. uh .. it’s a relatively inexpensive solution compared to an SSD.

    So that has been introduced into the market in the last few weeks and is being designed in at some of the OEMs.

    So to summarize these two major technologies – they fit very well into the financial markets.

    And we work very closely with .. uh .. big .. big banks and institutions on Wall Street to .. have them recognize directly the value of our IP .. which then they call out to the hardware manufacturers .. uh .. in their .. huh .. technical requirements.

    at the 18:50 minute mark

    As a business, in the last 10 years, we started out building a lot of .. uh .. memory module products that are based on thermo-mechanical innovations – stacking more memory in a given space. And we got very efficient at doing that.

    At a certain point it got to a point where you can pack in all this memory, but .. physically .. but there was a server .. uh .. bottleneck .. between the CPU and the memory.

    And so we got into the .. electrical side of this .. and created chipsets .. uh .. the logic in particular, that facilitated the data transfer and .. uh .. addressed that bottleneck issue.

    So, moving forward, we are evolving. We are still a memory module company, but we are evolving to .. uh .. also to .. uh .. to become a designer of custom logic, and over time an established semiconductor company.

    at the 19:50 minute mark

    And we believe that today it is a niche market. At the very high end .. addressing the financial markets and also virtualization, data centers, but that will evolve into industry-wide adoption of this technology as the servers become faster. And the DRAM manufacturers continue to have issues trying to progress the density of their DRAMs.

    at the 20:10 minute mark

    So very quickly on some of the financial highlights.

    As you see that .. uh .. our gross margins .. uh .. the product gross margins are steadily increasing.

    R&D (research and development) .. uh .. we have expended quite a bit of R&D to create the next generation HyperCloud last year.

    Uh .. we believe that will .. uh .. flatten out this year, because the bulk of the spending was done last year.

    at the 20:42 minute mark

    Uh .. the balance sheet as of Jan 1, 2011 is .. uh .. shows how we’ve got sufficient cash .. uh .. and resources on hand to continue to .. uh .. roll out our product into the market this year, and continue the R&D efforts as well.

    at the 21:00 minute mark

    So in summary, we are a company .. uh .. that has a long track record with the major OEMs – the guys that we are targeting today to adopt – and that are currently testing our new products – the HyperCloud, the NetVault.

    And then in the end, they also will become the adopter of our IP. So it is important that we’ve got a long-running relationship with these customers.

    We are addressing a VERY large market .. um .. and we’ve got a strong IP position in these two seminal .. uh .. technologies.

    at the 21:45 minute mark

    And as you .. as I’ve just explained that we’ve got flexibility in this business model.

    As we are today .. uh .. a memory module provider, manufacturer, designer, but we are also a designer of custom ASIC (application-specific integrated circuit) logic chip .. uh .. so that can evolve into a fabless IC and an IP licensing model as well, as the market gets .. more mainstream.

    at the 22:15 minute mark

    And .. uh .. so therefore we think there will be long-term ROI (return on investment) on the investment that has been made in the last couple of years and that we continue to make. ROI that’s going out through the rest of this decade.

    And we’ve got a strong management team which is .. still holds a significant stake in the company .. uh .. and we’re at this for the long term.

    So with that, I’ll .. uh .. open it up for some questions.

    Question and Answer:

    at the 22:50 minute mark

    Moderator:

    Yeah Dave.

    Analyst:

    What’s going to make your TAM (Total Addressable Market) or your SAM (Served Addressable Market – the portion it would be able to serve) stand out .. kind of .. trading .. supporting (?) the Jefferies numbers ?

    Why why does that really start to mushroom out. What’s the real change that is forcing that ?

    (Explanation of TAM/SAM/SOM: http://answers.yahoo.com/question/index?qid=20060930204510AA4SAvf )

    at the 23:00 minute mark

    Chuck Hong:

    I think the movement .. uh .. of servers to higher speed is critical.

    There are two things – so on .. on the demand side .. um .. you’ve got servers .. you’ve got cloud computing which means more servers.

    But servers also running much faster.

    Today’s it’s running at about 1GHz (probably referring to the memory bandwidth i.e. 1333MHz, 1066MHz and 800MHz – as you increase memory loading on a server’s memory channel).

    In a couple of years – in a few years it’ll move to 2GHz. That’s a huge jump.

    Without this technology, the CPU will run that fast but .. memory will not be able to .. to run as fast.

    So that’s one .. the other thing .. so at DDR4, our technology .. is looking to be adopted by the industry as the defacto mainstream.

    Today it is a high-end market segment.

    at the 23:55 minute mark

    And then .. so the DRAM manufacturers will continue to have issues progressing .. uh .. their DRAM densities such that .. uh .. we’ll have to use more .. DRAMs to achieve the densities – more more DRAMs.

    So when you use 72 DRAM chips vs. 36 (note this is not 32 and 64 because of error correction in server memory modules, which is why 36 and 72 are the standard numbers they talk about), you are going to need to do the rank multiplication – use that rank multiplication technology that we have.
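The "rank multiplication" idea can be illustrated with a toy address decoder. The bit layout and names below are hypothetical, not NLST's actual design: the memory controller addresses 2 logical ranks, and on-module logic reuses a high address bit to fan out to 4 physical ranks:

```python
# Toy illustration of rank multiplication: a module with more physical
# ranks than the controller supports presents fewer logical ranks, and
# on-module logic decodes an extra address bit to pick the physical rank.
# The 16-bit row width and the bit layout are invented for this sketch.

PHYSICAL_RANKS = 4   # what the module actually contains
LOGICAL_RANKS = 2    # what the controller believes it is addressing

def decode(logical_rank, row_address, extra_bit_width=1):
    """Map (logical rank, row address) to a (physical rank, row) pair.

    The top row-address bit(s) sent by the controller are reused to
    select among the extra physical ranks.
    """
    hidden = row_address >> (16 - extra_bit_width)
    physical_rank = logical_rank * 2 + (hidden & ((1 << extra_bit_width) - 1))
    physical_row = row_address & (0xFFFF >> extra_bit_width)
    return physical_rank, physical_row

# The controller addresses 2 ranks; the module fans out to all 4.
seen = {decode(lr, row)[0] for lr in range(LOGICAL_RANKS)
        for row in (0x0000, 0x8000)}
assert len(seen) == PHYSICAL_RANKS
print(sorted(seen))  # prints [0, 1, 2, 3]
```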

    at the 24:20 minute mark

    Analyst:

    One one other question. You know we have OCZ (who was here) the other day, which makes solid-state drives (SSD) .. right .. and then you mentioned Fusion IO which is essentially “flash on the board” ..

    Chuck Hong:

    Right.

    Analyst:

    .. that interface (?) .. and then you guys have a different way of of accelerating ..

    Chuck Hong:

    That’s right, that’s right.

    Analyst:

    There are all these different things that are going on to make .. just to make a little bit faster in and out, so .. how do you .. what’s your synergy with with those guys you’re doing (?) and how do you care to play with each other – are they all necessary ?

    at the 24:50 minute mark

    Chuck Hong:

    Well (in) some ways they overlap.

    If CPU does a better job of .. transacting data with the main memory, you will have less need in that server to go out to the SSD. Right ?

    So .. going out to SSD is not .. uh .. the most efficient way to transact data when you are trying to .. when you are doing high frequency trading.

    Or virtualization or high performance computing (HPC), so .. you know I don’t think we are DIRECT competition, but .. if one of those solutions does .. performs better at lower cost, then .. you know the solution moves there.

    Analyst:

    What’s the interplay like between your product and INTC and AMD, in terms of obviously INTC and AMD realizing that these kinds of bottlenecks are potentially limitations to (unintelligible) of their own product.

    Chuck Hong:

    That’s right.

    Analyst:

    They have a history of .. of incorporating .. uh .. and improving their own I/O and changing their own ..

    Chuck Hong:

    Right.

    Analyst:

    .. designs to incorporate some of these features, so how do you protect yourself from them essentially .. uh .. moving into this space or making changes in their processor design or board design (motherboard) that obviously (?) ..

    at the 26:15 minute mark

    Chuck Hong:

    Right right, so INTC and AMD, as they start on these server CPU designs – they start 5-6 years ahead of time.

    So I don’t think they .. HERE they did not foresee the .. the onslaught of .. cloud computing and virtualization on the demand side, and on the supply side they probably did not see .. that the DRAMs would not get there.

    Now for the next generation DDR4, they .. I don’t believe .. it doesn’t look like they’re going to make any more changes.

    Their solutions – they’re going to have to .. in order to obviate this kind of a solution (i.e. neutralize NLST HyperCloud), they would have to come up with a .. bigger chip, more pin counts .. uh .. more power consumption.

    That’s a multi-billion dollar plus solution.

    We have .. off of THEIR chipset .. they also see this as more of a “memory industry” problem, not their own, although they are impacted by it.

    So, it’s really the efficiency of the solution.

    Ok, we believe we’ve got a much more efficient solution .. that .. uh .. is not a multi-billion dollar solution. Right ?

    Moderator:

    Ok, thanks .. thank you very much for the presentation.

    Chuck Hong:

    Thank you.

    ——————–

  271. Netlist,
    Thank you for your efforts…. Obviously you believe this company is going to make an impact or you wouldn’t go through all this work. Do you have any projections of your own as to when and where this can go? I had some of my own and I invested for the long term, but given Allen & Caron’s failed projections and the timelines laid out by Hong that have come and gone time and time again, I have my doubts…
    If anyone else has any projections/opinions – I am all ears.
    Thanks again!

  272. memoryGeek

    To the best of my knowledge the lawsuits are on hold pending USPTO patent review in response to request of INPHI.

  273. Hi Wedgecake,

    My initial reason for writing this post was the mystery surrounding why Google acquired MetaRAM’s patent portfolio. That interest has evolved into cheering a little for the underdog, in this case Netlist, as they battle to provide memory modules that can transform ordinary computers into something much more powerful.

  274. Hi Bill,
    It’s a fascinating case and a Google “dilatory” strategy while they “use” Netlist IP. Looks like USPTO has consolidated all suits into one re-exam. I wonder how many years before we can get their initial review results?

  275. To anyone with USPTO experience:

    Could the consolidation of all of the challenges be a sign that USPTO would like to expedite a decision in order to clear repetitive cases on the docket? Hence a more rapid decision.

  276. Hi Wedgecake,

    It is fascinating, and I hope that someone on any side of the litigation is taking notes, with the eventual aim of writing a book about it. I suspect the human side of the story is even more interesting.

  277. Hi Fallguy,

    No specific USPTO litigation experience here, but courts often do consolidate cases in the name of “judicial economy,” so that’s probably something that was carefully considered when the cases were joined together.

  278. Thanks Bill,

    I should have been more clear in my question. I am really wondering if it could be an indication of fast-tracking. I don’t know if they even do that kind of thing, or if it is first come first served only. To me (I know that means little to the rest of the world) it seems that this patent is drawing a lot of attention, and these issues could very well be holding up implementation of the technology. Do they even care? Probably not.

  279. Hi Fallguy,

    I worked at the highest level criminal and civil law court in Delaware for more than a decade, so I have seen this kind of consolidation and “fast tracking” take place many times. Judicial economy includes the concept of fast tracking when it makes sense to do so, such as consolidating new cases with older ones when many of the issues that might be involved in one case will impact the outcomes of the other cases. I’ve seen many cases put on hold as well, for a decision to be made in one case that can have repercussions for the others. We’re talking about criminal cases that involved death penalties, and civil cases involving some of the largest corporations in the country.

  280. Hi Netlist
    I heard recently that next generation DDR4 is based on Netlist and Intel is talking to CEO Hong about industry wide license like Rambus. Do you think this would be a good deal for NLST to go the licensing route – since it appears that their internal development capabilities are suspect ? (Hong’s 2 yd line is more like 200 yds ?).

  281. quote:
    What is the status of this lawsuit?

    quote:
    Could the consolidation of all of the challenges be a sign that USPTO would like to expedite a decision in order to clear repetitive cases on the docket? Hence a more rapid decision.

    GOOG/SMOD have reexams against NLST patents.
    IPHI has reexams against NLST.

    The USPTO has consolidated a total of 5 reexams into two reexams – one for ‘386 patent, and one for ‘912 patent.

    This is probably simpler for NLST – as they can make consolidated responses as well. In fact, given the way reexams are conducted, there may be no better way than to consolidate (esp. when the reexams are all similar and happening at the same time).

    In these, the USPTO has completed the first office action – which frames the problem at hand – i.e. the “rejection” of the claims of the patent. As the reexam process proceeds, the patent is built up from scratch – which is why the reexam process is nearly as long as the patent granting process.

    Court cases were:

    GOOG vs. NLST
    NLST vs. GOOG

    and

    NLST vs. IPHI
    IPHI vs. NLST

    In detail:

    GOOG vs. NLST was the earliest case (thus the most mature case and near jury trial). GOOG initiated this after NLST warned GOOG they were infringing. It was GOOG’s way to prevent an injunction against its servers.

    NLST vs. GOOG is at an early stage.

    NLST vs. IPHI – where NLST claims IPHI’s “iMB” infringes.

    IPHI vs. NLST was a retaliatory lawsuit in which IPHI claimed that two of IPHI’s patents were being infringed by NLST. The patents related to buffers in general and were not specific to NLST IP (or even close to the IP that MetaRAM held – which conceded to NLST).

    All these court cases are stayed (at the JOINT request of NLST/GOOG and NLST/IPHI) pending the reexams. The “stay” means the cases are frozen, but have to be updated with news from the reexams. The cases otherwise remain frozen (apart from various bureaucratic activity) pending clarity from the reexam process.

    Recently the case IPHI vs. NLST was retracted by IPHI – asking court for dismissal.

    IPHI backing out of IPHI vs. NLST is very interesting. In response to NLST’s PR about the dropping of the suit, IPHI issued a PR stating they were doing it because the reexams were going so well that IPHI decided to cut legal costs (?). Technically this may be valid – i.e. IPHI can restart the case. However, it is a very bad signal from IPHI. That is, a company like IPHI would not normally do this on such a high profile issue (since IPHI is touting itself as a next-gen LRDIMM supplier by providing the “iMB” buffer chipset for use by LRDIMM memory module makers).

    This thread examines the possibility that the real reason for the IPHI backdown may have been the threat of the IPHI patents (the 2 patents being used in IPHI vs. NLST) being invalidated if IPHI vs. NLST was pursued:

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=31973&mid=32090&tof=4&rt=1&frt=2&off=1
    Re: Bigger news .. INPHI capitulation 22-Apr-11 11:05 am

    This is because in IPHI vs. NLST, NLST had counterargued that the IPHI patents had flaws of “double-patenting” (read the thread for more details) which may invalidate the second patent, and seriously damage the first patent (since newer patents failing to mention earlier patent can wind up restricting the earlier patent).

    Add to that the combination of such actions and IPHI’s hurried IPO – where many of the same analysts who cover NLST and IDTI (another buffer chipset manufacturer) FAIL to ask IPHI about its standing in the upcoming LRDIMM market, YET are able to ask NLST about it quite freely.

    Not to be ignored is the participation of a horde of institutions in IPHI’s IPO (and now a secondary offering – 3.8M out of 3.9M shares belonging to insiders/management).

    The horde of institutions: Jefferies, Morgan Stanley, Needham, Stifel Nicolaus – all bullish on IPHI and all participated in the IPO.

    Add to that the exit of IPHI’s CTO after 10 years.

    Add to that the ABSENCE of a yahoo board for IPHI or at any other place. Can IPHI PREVENT the creation of message boards – and has it done that wilfully to aid the hiding of info until “after the IPO” or “after we sell”?

    It is surprising these analysts do not ask:

    – why TXN is exiting the buffer chipset for LRDIMM segment (possibly linked to TXN settlement with NLST – which was reportedly favorable for NLST).

    – why they refuse to ask about the legal issues (which are now in IPHI’s “Risk Factors” section in SEC filings for the share offering)

    More in this post (from same thread as above):

    http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_N/threadview?m=tm&bn=51443&tid=31973&mid=32125&tof=3&rt=2&frt=2&off=1
    Re: Bigger news .. INPHI capitulation 4 26-Apr-11 01:32 am

  282. http://www.netlist.com/investors/investors.html
    http://viavid.net/dce.aspx?sid=0000853C
    Netlist First Quarter Results Conference Call
    Wednesday, May 11th at 5:00 pm ET

    Chuck Hong – NLST CEO
    Gail Sasaki – NLST CFO
    Jill Bertotti – Allen & Caron (Investor Relations firm for NLST)

    Jill Bertotti:

    (introductory remarks)

    at the 1:50 minute mark:

    Chuck Hong:

    Good afternoon, Jill.

    Thank you all for joining us to discuss the 2011 first quarter results (Q1 2011).

    As you saw in the release, we got off to a solid start for the year, with 52% revenue growth over last year’s first quarter, as well as continued strong sequential growth of 19% over the 2010 fourth quarter (Q4 2010).

    Margins remained about 30% during the period, well above the levels of the prior year period.

    The strong sequential growth in the period was due to an acceleration in demand for our Vault (NetVault etc.) family of products, as well as increase in the demand for our speciality memory modules which make up our base business.

    We expect an overall revenue growth trajectory of about 20% per quarter to continue throughout 2011 (20% compounded over four quarters roughly doubles revenue).
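A quick compounding check on the growth figure above (illustrative arithmetic only, starting from the $12M Q1 revenue reported later in the call):

```python
# Compounding check for the "20% per quarter" growth target.
q1_revenue = 12.0                     # $12M reported for Q1 2011
growth_per_quarter = 0.20

revenue = q1_revenue
for _ in range(4):                    # four more quarters of 20% growth
    revenue *= 1 + growth_per_quarter

print(round(revenue, 1))              # prints 24.9: roughly double
print(round((1.2 ** 4 - 1) * 100))    # prints 107: cumulative growth in %
```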

    This anticipated revenue and gross profit growth should reduce our losses significantly as we target financial breakeven during the second half of the year.

    at the 2:55 minute mark:

    First, I wanted to break down our recent commercial successes with the Vault family, previously referred to as NetVault, which is targeted primarily at storage data protection.

    Demand for the battery-free NetVault – NVvault product has been strong and growing over the past few periods. But it really ramped in the first quarter, as demand more than doubled for the product.

    End-users are very satisfied with the increased performance, and are drawn to the cost and environmental benefits that the device helps to achieve.

    at the 3:40 minute mark:

    That demand and our order flow are expected to grow as a result of a new DELL promotional push for their “CacheCade” technology which uses OUR device.

    DELL and LSI have architected a way to increase the performance of an SSD configuration by 76% with an NVvault being an important part of that configuration.

    The end-user response is expected to be very positive in the coming months.

    at the 4:05 minute mark:

    During the quarter we expanded our Vault family of data-protection products with the introduction of EXPRESSvault.

    EXPRESSvault is a PCIExpress backup and recovery solution for cache data protection.

    This product, like NVvault, is battery-free – it combines DRAM, to deliver the high data-throughput required by cache backup applications, with non-volatile flash.

    Early response to the product has been very promising. In part due to the proven track record of NVvault.

    We anticipate that order flow for EXPRESSvault will gain traction steadily in the coming quarters, accelerating in the back end of the year and early in 2012.

    at the 4:50 minute mark:

    To summarize, business for our Vault product franchise is strong and growing.

    We saw demand for NVvault battery-free products doubling in the first quarter, and expect it could double again in Q2.

    at the 5:10 minute mark:

    Our flash and SSD business will steadily become a more meaningful part of our revenue mix over the next several quarters as we launch new SSD products and increase our number of wins with data center equipment and embedded system OEMs.

    To that end, we expanded our flash product portfolio with two new SSD additions. The mSATA mini SATA SSD module offers storage capacity of up to 32GB with onboard 64MB DRAM cache. And the mSATA Slim SSD module offers storage capacities of up to 128GB with onboard 64MB DRAM cache.

    at the 5:54 minute mark:

    Both products’ smaller form factor and support for ultra-dense applications make them ideal for data center equipment where compute-density is critical.

    As data center equipment becomes increasingly compact, NLST is committed to offering new solutions that address those space limitations.

    at the 6:15 minute mark:

    Another benefit of our growing success in flash is that it creates another commercial bridge to the storage market.

    Due to the pace of change in growing demand, storage is among the most attractive technology markets today.

    And our participation in that market with our flash and SSDs, Vault data backup and recovery and speciality memory modules, further diversifies our efforts beyond the cloud computing space.

    at the 6:45 minute mark:

    During the quarter, we continued to work closely with the major server OEMs, major storage OEMs, end-customers, DRAM and CPU suppliers, and motherboard manufacturers in ongoing qualification work for HyperCloud.

    One recent result of our efforts is qualification of HyperCloud by CirraScale for its VB1325 Blade Servers.

    CirraScale develops built-to-order blade-based computing and storage data center infrastructures.

    The company selected HyperCloud to support its Blade Server because of the enhanced performance we bring for memory-intensive applications, such as electronic design automation (EDA) and high performance computing (HPC) simulations.

    Between now and the end of the year, NLST will be engaged in parallel efforts to win qualifications at the major server OEMs targeting specific market opportunities with Westmere platforms and at the same time we will continue to work towards broader OEM qualifications for Romley platforms.

    at the 7:50 minute mark:

    Potential OEM partners remain enthusiastic and supportive of HyperCloud and the benefits that they derive.

    We are very encouraged with these working relationships as qualification efforts move ahead.

    at the 8:05 minute mark:

    As we move beyond DDR3 and into DDR4 technologies, the market for HyperCloud “rank multiplication” and “load reduction” capabilities will become mainstream for servers, in contrast to the niche high-end markets available to us today at DDR3.

    at the 8:20 minute mark:

    At that point, we believe that the TAM (Total Addressable Market) .. the total available market .. will grow significantly from the $300M to $500M it is today.

    We are EARLY to this space and ahead of the industry in our design and intellectual property (IP).

    Our goal is to remain in a leadership position as this opportunity escalates.

    at the 8:45 minute mark:

    Due to that market potential, some companies in the memory space have challenged our patent position related to HyperCloud.

    While these processes will need to run their course, we are comfortable in our position and confident in the validity and enforceability of our patents.

    In fact, at the end of the quarter, the USPTO issued 3 new patents that add to our growing intellectual property (IP) portfolio, protecting HyperCloud innovations that utilize “rank multiplication” and “load reduction” technologies.

    at the 9:20 minute mark:

    It is also important to note that none of our current or pipeline technologies rely on any single piece of intellectual property for protection and commercialization.

    In summary, we are executing in all of our core business categories, as evidenced by 8 consecutive quarters of increasing gross profit performance.

    We continue to position HyperCloud and our Vault family as technology standards for the industry.

    We will also continue our investment in the next generations of both product platforms and take advantage of new opportunities in flash and SSD to provide performance benefits and higher density to our customer base in a dynamic storage and cloud computing market.

    at the 10:05 minute mark:

    Gail will now provide you a more detailed financial update and first quarter results (Q1 2011).

    at the 10:10 minute mark:

    Gail Sasaki:

    Thanks Chuck and good afternoon everyone.

    As you saw on our release this afternoon, revenues for the first quarter ended April 2, 2011 (Q1 2011) were $12M, up 52% when compared to $7.9M for the first quarter ended April 3, 2010 (Q1 2010).

    Revenue for our Vault family of products – NVvault battery-free and battery-backed – increased from the previous quarter by 12%.

    The NVvault mix during the first quarter was, as we expected during our last call, weighted towards the higher ASP (Average Selling Price), more robust feature-set and battery-free version of NVvault.

    Our OEM partners saw improved traction from their customers speaking to (?) operating, ecological and economic advantages of that product.

    at the 11:05 minute mark:

    Gross profit for the first quarter ended April 2, 2011 (Q1 2011) was $3.8M or 32% of revenues, compared to a gross profit of $1.8M or 23% of revenues for the first quarter ended April 3, 2010 (Q1 2010), an increase in gross profit dollars of 109%.
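The quoted percentages can be sanity-checked from the rounded dollar amounts in the release (editorial arithmetic only):

```python
# Sanity-check the margin figures quoted above using the rounded amounts.
q1_2011_gp, q1_2011_rev = 3.8, 12.0   # $M, quarter ended April 2, 2011
q1_2010_gp, q1_2010_rev = 1.8, 7.9    # $M, quarter ended April 3, 2010

print(round(100 * q1_2011_gp / q1_2011_rev))  # prints 32 (% of revenues)
print(round(100 * q1_2010_gp / q1_2010_rev))  # prints 23 (% of revenues)

# With these rounded inputs the gross-profit-dollar increase computes to
# ~111%; the 109% quoted on the call presumably reflects unrounded amounts.
print(round(100 * (q1_2011_gp - q1_2010_gp) / q1_2010_gp))
```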

    This improvement was due to the 52% increase in revenue, a favorable DRAM cost environment, as well as increased absorption of manufacturing cost, as we produced 64% more units than the year earlier quarter with only a slight 4% increase in the cost of factory labor and overhead.

    We continue to plan on a range of between 25% and 30% for our gross profit percentage for the remaining quarters of 2011.

    Which will be dependent on the quarter’s product mix, DRAM cost and continued growth in unit production in each quarter.

    at the 12:00 minute mark:

    Net loss in the first quarter ended April 2, 2011 (Q1 2011) was $2.8M or a $0.11 loss per share, compared to a net loss in the prior period of $3.0M or a $0.14 loss per share.

    These results include stock-based compensation in the first quarter of $353,000 compared with $382,000 in the prior year period.

    And depreciation and amortization expenses of $581,000 in the most recent quarter compared with $578,000 in the year earlier period.

    at the 12:35 minute mark:

    Total operating expense was flattish at $6.6M from $6.5M in the previous consecutive quarter (Q4 2010), as we had estimated during the last quarter’s call.

    The increase from $5.6M in the year earlier quarter (Q1 2010) was primarily due to higher non-recurring engineering charges, headcount and material expenses related to product sales, primarily for HyperCloud and NVvault development.

    Sales and marketing expense was also flat between consecutive quarters but did increase by 18% from the year earlier quarter (Q1 2010) as we have expanded sampling and qualifications activities by a large percentage and invested in new headcount necessary to execute our vertical marketing strategy of engaging with end-user customers.

    at the 13:20 minute mark:

    We expect that operating expenses may increase by 10-15% during the second quarter (Q2 2011) and stay flattish throughout the remainder of the year.

    We did not record a benefit for income taxes for the first quarter ended April 2, 2011 (Q1 2011) as the operating loss carryforwards generated were fully reserved.

    On a go-forward basis we anticipate a rate near zero percent until we begin to utilize our fully reserved net deferred tax asset.

    at the 13:50 minute mark:

    We ended the first quarter with cash, cash equivalents and investments and marketable securities totalling $12M compared to $16M as of Jan 1, 2011.

    During Q1, we took delivery of $3M of critical long lead time components to support fulfilment of flash, NVvault and HyperCloud product lines.

    We do not expect this level of buy ahead to continue as we now have more visibility into our supply chain after the Japan earthquake.

    at the 14:20 minute mark:

    At the end of the quarter we had unutilized availability of $2.9M on our credit line.

    During the first quarter capital expenditures totalled $110,000 compared to $208,000 in the previous year’s quarter (Q1 2010).

    We anticipate investment in equipment to support increased capacity for our new products over the next several months of approximately $500,000.

    As mentioned in our previous call, we continue to target financial breakeven later this year.

    However we will still be a net user of cash during the year as our accounts receivable and inventory continue to expand to support the increased revenue.

    at the 15:00 minute mark:

    As you know from past calls, we have got sufficient capacity on our current $15M available line of credit for working capital needs.

    In addition, after the quarter end, we signed a term loan agreement for an infusion of $3M from our bank partner to support general growth needs.

    This gives us an additional buffer as we progress towards cash positive.

    Thank you for listening in today.

    Operator, we are now ready for questions.

    Operator:

    We will now begin the question and answer session ..

    at the 15:45 minute mark:

    Rich Kugele of Needham:

    Thank you. Good afternoon.

    Um .. just a few questions. I guess first you were just talking about the inventory, Gail.

    Um .. did you see any supply disruptions ? Was that .. just a precaution, taking on that inventory ?

    Gail Sasaki:

    Actually not seeing any disruptions yet. It was merely precautionary.

    Rich Kugele of Needham:

    Um .. and then just to get into some specifics in terms of the model.

    Can you break down the revenue between the various categories, between the you know traditional business and NVvault, etc.

    Gail Sasaki:

    Sure. Um .. so ok, NVvault battery-free was 31% of our revenue this quarter.

    And the battery-backed version was 30%.

    So total of 61% for the Vault family.

    at the 16:50 minute mark:

    Flash .. um .. and other specialty memory was 38% of the 39% (i.e. 38 of the remaining 39 percentage points).

    And HyperCloud was minimal.
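    (Annotation, not from the call: the stated product mix can be sanity-checked – Vault at 61%, flash/specialty at 38%, leaving roughly 1% for HyperCloud as the implied remainder. A minimal sketch:)

```python
# Q1 2011 revenue mix as stated on the call (fractions of total revenue).
mix = {
    "NVvault battery-free": 0.31,
    "NVvault battery-backed": 0.30,
    "flash / specialty memory": 0.38,
    "HyperCloud (minimal)": 0.01,  # implied remainder, not stated explicitly
}

vault_total = mix["NVvault battery-free"] + mix["NVvault battery-backed"]
print(round(vault_total * 100))        # 61 – matches "total of 61% for the Vault family"
print(round(sum(mix.values()) * 100))  # 100
```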

    Rich Kugele of Needham:

    Ok, and then on previous calls you’ve .. Chuck you’ve talked about there could be some HyperCloud deployments outside of a qual if they were pulled from the customer end.

    Have you had any traction on that front and .. um .. would you expect that to actually happen or would you expect a qual to happen first.

    at the 17:25 minute mark:

    Chuck Hong:

    Hey Rich. We expect to see HyperCloud revenues .. uh .. starting here in the next couple of months.

    We have orders and .. so we’ll see some shipments start.

    We’re still working .. actively to achieve broad qualifications across many different customers.

    at the 18:00 minute mark:

    Rich Kugele of Needham:

    Ok, and then what is the breakeven if you’re talking about being at least I guess EPS positive later in the year. What would the breakeven be .. um .. and the anticipated OpEx (operating expenditure) I guess at that level.

    Gail Sasaki:

    Rich, I think we’ve mentioned in previous calls that it should be about $20M in revenue.

    With .. with an OpEx of around $7M.
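    (Annotation, not from the call: those two figures imply the gross margin the breakeven model needs – at breakeven, gross profit must cover operating expenses. A quick check using the $20M and $7M figures:)

```python
# At breakeven, revenue * gross_margin == opex, so the implied margin is opex / revenue.
breakeven_revenue_m = 20.0  # $20M quarterly revenue, per the call
opex_m = 7.0                # ~$7M quarterly OpEx, per the call

implied_gross_margin = opex_m / breakeven_revenue_m
print(implied_gross_margin)  # 0.35, i.e. a 35% gross margin is implied
```

    (That implied 35% sits somewhat above the gross margin percentages discussed elsewhere on these calls, which may be why the breakeven target also leans on revenue growth and product mix.)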

    Rich Kugele of Needham:

    Ok, great. I’ll get back in the queue. Thank you.

    Gail Sasaki:

    Thank you.

    at the 18:50 minute mark:

    Rich Kugele of Needham:

    Ok, that was quick. Uh .. just wanted to get into the SSD side a little bit better.

    Chuck can you clarify a little bit of the comments you are making about how NVvault is being used in .. an SSD system and whether or not you are also referring to your SSD modules also being included in the same system. Or is that a third element ?

    Chuck Hong:

    No, that is a different element. You have in the DELL PowerEdge Server .. uh .. we’ve been supplying the battery product as well as the battery-less custom module for many many years.

    So in the last .. uh .. 6 months we started to ship the NVvault and that gets integrated into an SSD configuration that .. uh .. that is designed by DELL and LSI Logic.

    And our NVvault product gets integrated into that SSD product.

    Uh .. which then improves the performance of that product.

    So we believe that is gonna be a catalyst for continued ramp of the NVvault product.

    On the flash product offering that we are starting to build out .. uh .. that is our own product.

    That is targeted more towards industrial and embedded applications, small form factor applications.

    Where they don’t .. the product is not taking up a standard hard disk (HDD) drive bay the way an SSD that is an HDD replacement does.

    This product is much smaller. It has a SATA/mini-SATA interface and it is going into various different military, industrial and .. uh .. some amount of data center applications where there are space constraints.

    Rich Kugele of Needham:

    Ok. That is helpful.

    Um .. and then just lastly on the R&D front, you talked about a fairly meaningful sequential increase.

    Is that tied to Romley or what is that extra expense tied to ?

    at the 21:45 minute mark:

    Gail Sasaki:

    It is partially Romley. And it is also .. DDR3 NVvault.

    Rich Kugele of Needham:

    So some type of next-gen ..

    Gail Sasaki:

    Yes.

    Rich Kugele of Needham:

    Ok, but you would expect it to stabilize at that level ?

    Gail Sasaki:

    Yes.

    Rich Kugele of Needham:

    Ok. Alright, that’s it for me.

    Gail Sasaki:

    Thanks Rich.

    at the 22:35 minute mark:

    Keith Ellis of Midwestern Analytic:

    Um .. Chuck maybe just want to shift gears a little bit if there are not any further technical questions.

    A concern of our group has been your continued sale of stock, the performance of the company in general and I would also say, based on today’s call, it appears we are no closer to the “2 yard line”.

    Talk a little bit about your motivation to sell stock and a little bit about what you receive in stock based compensation and your plans for the future.

    Thank you.

    Chuck Hong:

    I don’t know whether it is appropriate on the call to talk about my personal .. you know financial transactions. Whatever stock is being sold, it is off of a 10b5-1 plan that has been in place for a long long time, so you know it .. that is no different from .. uh .. any other executive stock sales at a public company.

    at the 23:40 minute mark:

    As to you know the performance of the company. We’re .. as we’ve outlined in this call .. uh .. we believe that you know we are doing all the right things to .. continue to build on a foundation of this recovery as you saw from the top line growth, we are confident that the top line will continue to grow through the rest of this year and into next year very .. uh .. consistently.

    Uh .. so that by the you know towards the end of the year we’re getting to the point where we are not losing money.

    So, given it is not as quickly as .. it is not happening as quickly as we would like. But the fact of the matter is the recovery is strong and the business is being built back up quite nicely.

    at the 24:50 minute mark:

    So and we continue to .. work with the major customers on the qualification of the HyperCloud and we believe that it will become a very important technology at Romley and at DDR4.

    So hopefully that addresses your questions.

    Chuck Hong:

    Thank you for listening in and we look forward to continued interest in the quarters ahead. Thank you very much.

  283. http://www.netlist.com/investors/investors.html
    http://viavid.net/dce.aspx?sid=00008B06
    Netlist – 2011 Second Quarter and Six-Month Results Conference Call
    Aug 15, 2011 05:00 PM (ET)

    Participants:

    Chuck Hong – NLST CEO
    Gail Sasaki – NLST CFO
    Matt Lawson – Allen & Caron (Investor Relations firm for NLST)

    Chuck Hong:

    Hi Matt.

    Thank you all for joining us – to discuss the 2011 second quarter (Q2 2011) and six months results.

    As noted in our release, we are pleased with another quarter of strong revenue and gross profit growth.

    In the second quarter, revenues grew by 72% over 2010 second quarter (Q2 2010) and 33% sequentially over (the) last quarter (Q1 2011).

    at the 2:40 minute mark:

    Margins remained at about 30% during the period – well above last year’s second quarter (Q2 2010) level of 20%.

    Losses have been significantly reduced through increased gross profit dollar contribution, even as we continue to invest in R&D (research and development) and targeted marketing programs.

    Our cash-based loss was close to breakeven for the current quarter.

    We continue to target breakeven during the second half of the year, which will be accomplished by steady revenue growth across all product categories.

    Now for some greater detail by product line.

    at the 3:15 minute mark:

    First the Vault family (NetVault, now called NVvault).

    The revenue growth during the quarter was again anchored by continued strong demand for our Vault family of products.

    Both the NVvault flash-backed battery-free (originally called NetVault-NV), and the original battery-backed version (originally called NetVault-BB).

    at the 3:30 minute mark:

    Vault sales in Q2 (Q2 2011) increased by 126% over the previous year’s Q2 (Q2 2010) and by 36% sequentially from Q1 of this year (Q1 2011).

    In addition, since the introduction of ExpressVault, our PCIExpress backup and recovery solution, we are seeing adoption and system integration of our Vault technology at a growing base of new customers and expect to see production revenues from this new member of the Vault family during the second half of this year.

    at the 4:00 minute mark:

    Earlier this month, we introduced and began sampling our next-generation NVvault DDR3 product which combines the high performance of DDR3 RAM with our proprietary Vault cache-to-flash controller.

    NVvault DDR3 extends market leadership (that) we have established with the current DDR2 generation.

    By working directly with CPU manufacturers to facilitate a plug-and-play functionality in the next generation of servers, the NVvault DDR3 offers greater memory capacity and data restore capability in a standard DDR3 interface.

    We have begun sampling this product to a broad base of customers and anticipate production revenues to begin in early 2012.

    at the 4:50 minute mark:

    The flash family.

    In the second quarter (Q2 2011), flash sales more than doubled sequentially from the first quarter.

    This growth is driven by our expanding embedded flash product portfolio and many design wins at multiple customers across medical, industrial and networking equipment segments.

    We recently announced a new mini-PCIExpress SSD which features a smaller form factor, storage capacity of up to 128GB and up to five times the storage density of legacy solutions.

    at the 5:30 minute mark:

    These SSDs uniquely address space constraints challenging data center equipment.

    As we noted in last quarter’s call (Q1 2011), our growing flash and Vault portfolios allow us to participate in the high-growth storage market, further diversifying our business beyond the cloud-computing space.

    at the 5:50 minute mark:

    HyperCloud.

    During the quarter we intensified our HyperCloud qualification efforts to server OEMs and end-user customers.

    During the quarter we reached a notable milestone by surpassing $1M in booked orders.

    One of our largest end-user customers is a major internet retailer who is now using HyperCloud to upgrade their existing Westmere and Nehalem servers, in order to drive greater performance from their installed base of servers.

    at the 6:25 minute mark:

    Since the end of the first quarter (Q1 2011), we announced three new HyperCloud qualifications.

    On last quarter’s call we discussed the qualification of Cirrascale in April.

    Recently we announced NEC’s qualification and endorsement of HyperCloud.

    NEC will make HyperCloud available with its LX-series of supercomputers, enabling various high performance computing applications in industry, academia and research.

    Lastly, Ciara, Canada’s leading integrator of Intel-based servers, qualified HyperCloud with its Altas servers and Titan graphics processing unit systems.

    Ciara customers are now able to run more advanced memory intensive simulations within a given time frame and increase their overall productivity.

    In addition, HyperCloud memory modules were integrated, tested and validated with industry-leading NexentaStor open source software, reinforcing the product’s ability to support memory-intensive applications such as virtualization and storage.

    at the 7:35 minute mark:

    During the quarter we also made solid progress with other OEMs in the process of qualifying HyperCloud for the Romley platform – Intel’s next-generation of server CPUs.

    As we stated in the past, unlike the LRDIMM memory, HyperCloud does not create significant additional system latency, and does not require special software support in the Romley in order for the memory to operate.

    Still, our engineering team continues to work closely with the major server OEMs to run through the litany of component and end-system tests on the OEM’s Romley platforms in order to ensure the long-term reliability of the product.

    We expect testing to be successfully completed whereupon HyperCloud technology would be formally adopted by the major OEMs.

    at the 8:30 minute mark:

    We expect HyperCloud products to be made available for sale by these OEMs concurrent with their launch of the Romley-based servers at the end of this year, or beginning of next year.

    at the 8:38 minute mark:

    Specialty DIMMs.

    This month we introduced two new specialty DIMMs to NLST’s growing portfolio of products.

    Both products address the specific demands of high-performance computing, cloud computing, data analytics and virtualized data center environments.

    The first product, HyperStream, is a low-latency server memory for high-speed applications available in 4GB and 8GB DDR3 configurations.

    We are excited to have DELL’s Software and Peripherals Group make HyperStream available for sale, and in recent weeks a major financial services firm has placed an initial order of HyperStream for deployment in its data center.

    We expect HyperStream sales to grow over the coming quarters.

    at the 9:30 minute mark:

    Second, our 16GB quad-rank Very Low Profile RDIMM or VLP RDIMM delivers high density memory into space-constrained systems such as blade servers, storage bridge bay (SBB – for example Boston Limited’s Igloo 3U NXStor SBB appliance) and networking equipment.

    NLST invented the VLP module in 2003 based around an IBM specification, and has continued to improve its design over the years.

    Recently, we applied our patented Planar-X technology to the VLP design to achieve 16GB density using the lowest cost per bit 2Gbit DRAM (seems 2Gbit x 2 i.e. dual-die compared to competitors’ 4Gbit x 2 dual-die which is more expensive).

    By comparison, competitors must use the much more expensive 4Gbit DRAM to achieve the same 16GB density on a VLP.
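    (Annotation, not from the call: the die-count arithmetic behind that comparison, assuming the per-die densities mentioned and 1 GB = 8 Gbit – a hedged sketch, since the exact package organization isn’t spelled out on the call:)

```python
# A 16GB module holds 16 * 8 = 128 Gbit of DRAM.
module_gbit = 16 * 8

# Built from the lowest cost-per-bit 2Gbit DRAM (the Planar-X approach described):
dies_2gbit = module_gbit // 2  # 64 dies
# Built from the pricier 4Gbit DRAM (the competitors' approach described above):
dies_4gbit = module_gbit // 4  # 32 dies

print(dies_2gbit, dies_4gbit)  # 64 32
```

    (Read this way, Planar-X trades a higher die count for the lower cost per bit of mainstream 2Gbit DRAM within the same VLP form factor.)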

    at the 10:21 minute mark:

    With NLST quad-rank VLP, OEMs are able to meet the needs of data centers’ demand for main memory capacity, while drastically reducing the cost of that memory (because 2Gbit x 2 dual-die is cheaper per GB than 4Gbit x 2 dual-die memory package).

    We are currently in qualification with a major OEM and anticipate production revenue in Q4.

    at the 10:40 minute mark:

    As I mentioned earlier, we had a very positive quarter.

    Not only with revenue growth and financial progress.

    But with the addition of several new compelling products and continued progress with HyperCloud qualification efforts.

    We will continue our investment in the next generation of our flagship product platforms – HyperCloud and the Vault family.

    And take advantage of new opportunities in flash, low power SSDs and other specialty DIMMs.

    All of these products provide high value memory solutions to our customer base in the dynamic storage and the cloud computing market and helps to create a foundation for strong growth for the company.

    Gail will now provide you with a more detailed financial update on the second quarter and six months results.

    Gail ?

    at the 11:30 minute mark:

    Gail Sasaki:

    Thanks Chuck and good afternoon everyone.

    As you (saw ? – unintelligible) on our release this afternoon, revenues for the second quarter ended July 2, 2011 were $16M up 72% when compared to $9.3M for the second quarter ended July 3, 2010 (Q2 2010) and up 33% from the $12M in revenue for the first quarter ended April 2, 2011.

    Gross profit for the second quarter ended July 2, 2011 was $4.9M or 31% of revenue, compared to a gross profit of $1.8M or 20% of revenues for the second quarter ended July 3, 2010.

    An increase in gross profit dollars of 172%.

    This improvement was due to the 72% increase in revenue and favorable DRAM cost environment as well as the increased absorption of manufacturing costs, as we (unintelligible) 101% more units than the year earlier quarter, with a 16% increase in the cost of factory labor and overhead.

    We continue to plan on a range between 25% to 30% for a gross profit percentage during the second half of this year.

    This will be dependent on the quarter’s and second half of the year’s product mix, DRAM cost and continued growth in unit production in each quarter.

    at the 12:50 minute mark:

    Net loss in the second quarter ended July 2, 2011 was $1.5M or $0.06 per share, compared to a net loss in the prior year period of $4M or $0.16 per share.

    These results include stock based compensation in the second quarter of $406,000 compared with $426,000 in the prior year period.

    And depreciation and amortization expense of $602,000 in the most recent quarter, compared with $552,000 in the year earlier period.

    at the 13:30 minute mark:

    Our cash-based loss after adding back these non-cash items was reduced to $503,000 which is an improvement of 83% over last year’s quarter.
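    (Annotation, not from the call: “cash-based loss” here is the net loss with the two non-cash items just listed added back. Reconstructing the prior-year figure from the numbers given:)

```python
# Cash-based loss = net loss minus non-cash charges (stock comp, D&A).
# Figures in $ thousands, from the Q2 2011 call; the Q2 2011 net loss is
# rounded to $1.5M on the call, so the reported $503K is used directly.
def cash_based_loss(net_loss, stock_comp, dep_amort):
    return net_loss - stock_comp - dep_amort

q2_2010_cash_loss = cash_based_loss(4000, 426, 552)  # prior-year quarter
q2_2011_cash_loss = 503                              # as reported on the call

improvement = 1 - q2_2011_cash_loss / q2_2010_cash_loss
print(q2_2010_cash_loss, round(improvement * 100))   # 3022 83
```

    (The ~$3.0M prior-year cash-based loss and the 83% improvement are consistent with what the CFO states.)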

    Revenues for the six months ended July 2, 2011 were $28M up 53% from revenues of $17.2M for the prior year period.

    Gross profit for the six months ended July 2, 2011 was $8.7M or 31% of revenues, compared to a gross profit of $3.6M or 21% of revenue for the six months ended July 3, 2010.

    Net loss for the six months ended July 2, 2011 was $4.3M or a $0.17 loss per share compared to a net loss in the prior period of $6.9M or $0.31 loss per share.

    These results include stock based compensation expense in the six months of $759,000 compared with $808,000 in the prior year period, and depreciation and amortization expense of $1,083,000 compared with $1,130,000 in the year earlier period.

    Total operating expenses decreased 4% to $6.3M from $6.6M in the previous consecutive quarter.

    The increase from $5.8M in the year earlier quarter was primarily due to an 18% increase in research and development (R&D) expense.

    From increased engineering headcount and material expenses related to product build (?) primarily related to HyperCloud and NVvault development.

    at the 15:00 minute mark:

    Sales and marketing expense increased by 8% from the previous year due to increased sampling and qualification activities, and investment of new headcount necessary to execute our vertical marketing strategy of engaging with end-user customers directly.

    Administrative expense decreased 15% from the year earlier quarter.

    Overall we do expect that total operating expenses may increase by 10-15% during the second half of the year, mainly driven by anticipated increases in next-generation HyperCloud and Vault engineering headcount and program (?).

    at the 15:39 minute mark:

    On the IP front, we continue to vigorously defend our patent rights in the U.S. Patent Office.

    As noted in earlier calls, these processes will run their course and we remain comfortable in our position and confident in the validity and enforceability of our patents.

    at the 15:57 minute mark:

    During the quarter the court dismissed the separate cases brought against us by Inphi and Ring Technologies.

    We did not record a benefit for income taxes for the second quarter ended July 2, 2011 as the operating loss carryforward was fully reserved.

    On a go-forward basis we anticipate a rate of 0% until we begin to utilize our fully reserved net deferred tax asset.

    We ended the second quarter with cash, cash equivalents and investments and marketable securities totalling $12.1M, compared to $12.3M as of April 2, 2011.

    At the end of the quarter we had unutilized availability of $4.4M on our credit line.

    at the 16:43 minute mark:

    During the second quarter capital expenditures totalled $134,000 compared to $184,000 in the previous year’s quarter.

    We anticipate investment in equipment to support increased capacity in our new products over the next several months of approximately $500,000.

    at the 17:00 minute mark:

    As mentioned in previous calls, we continue to target financial breakeven by the end of the year.

    However, we may still be a net user of cash during the second half, as our accounts receivable and inventory continue to expand to support the increased revenue.

    at the 17:17 minute mark:

    We increased our investment in inventory during the second quarter in order to prepare for a backlog of orders shipping in the third quarter.

    However our target during the second half is to reduce inventory to 45 days on hand.

    at the 17:35 minute mark:

    As you know from past calls, we have sufficient capacity on our current $15M line of credit for working capital needs.

    In addition, during the second quarter we received an infusion of cash due to a $3M term loan from our bank (unintelligible – to support ?) general growth needs.

    Thank you for listening in today.

    Operator we are now ready for questions.

    Question & Answer session ..

    at the 18:15 minute mark:

    Arnaub Chanda – Ross Capital Partners:

    Hi, can you hear me ?

    Gail Sasaki:

    Yes, Arnab.

    Arnaub Chanda – Ross Capital Partners:

    Thanks. Thanks Gail.

    The question .. I have really a couple of questions.

    One is .. you know if you look at the HyperCloud .. you know design wins or customer activity, it seems like it’s taken you know .. longer than you would anticipate when you first talked about the product.

    Can you talk a little bit about what the gating items are .. because it’s proprietary, or is it because you know there are other standards out there ?

    Can you describe sort of what your strategy is and what the alternatives are in front of customers ?

    And sort of what time frame do we think we can see you know adoption – do you need to have a second generation product. Thank you.

    Chuck Hong:

    Hi Arnab.

    On the issue of what remains at the customers’ (end) in order to get qualification, as I mentioned .. (there is) a lot of product level and in-system testing for long-term reliability.

    The product has taken longer to qualify.

    I think in terms of the timeline from here on out, I think we are on track now to get the testing completed and qualification finished to get on the customer’s approved vendor list and to have the technology and product get adopted by the .. by the major OEMs in time for the release of the Romley platform.

    at the 20:15 minute mark:

    In terms of what the competitive product is .. probably the main product out there is the LRDIMM (Inphi, IDTI make buffer chips for LRDIMM) – Load-Reduced DIMM.

    Which is .. a product that, I believe, is in the process of becoming qualified, but as we mentioned from the get-go, that product .. has, we believe, performance issues, such as high latency.

    Along with a special facilitation in the BIOS that has to be done in order for the product to operate properly on Romley.

    These are areas where .. we believe our product shows superior performance ..

    at the 21:15 minute mark:

    Angenie (?) – Needham & Co:

    Hello.

    Thank you for taking my question.

    I am calling on behalf of Rich Kugele.

    I just had a question on gross margins first off .. you had some commentary on the second half of this year and I missed that and I wonder if you could reiterate that ?

    Gail Sasaki:

    Sure.

    We are .. I basically stated that we are expecting a range of 25-30% for the second half of the year.

    Anthony (?) – Needham & Co:

    Ok, understood.

    And as far as the operating expenses go, I noticed that they are quite low on percentage of revenue basis .. just wondering if you think gross margin outperformance and strong performance as far as operating expense, is that going to be sustainable or (if you can) give any color on what to expect for the second half of the year.

    at the 22:11 minute mark:

    Gail Sasaki:

    I did mention earlier that we do believe that operating expenses could go up by 10-15% during the second half of the year.

    And I reiterate my position on the gross margins.

    Anthony (?) – Needham & Co:

    Understood.

    That finishes me up for today.

    Gail Sasaki:

    Ok. Thank you.

    at the 23:07 minute mark:

    Thank you for your involvement with NLST and we look forward to our call in the next quarter.

    Thank you.

  284. http://www.netlist.com/investors/investors.html
    Craig-Hallum 2nd Annual Alpha Select Conference
    Thursday, October 6th at 10:40 am ET

    http://wsw.com/webcast/ch/nlst/

    Participants:

    Gail Sasaki – NLST CFO
    Chris Lopes – NLST VP Sales and Co-Founder

    Gail Sasaki:

    Alright. Just going to get started.

    I’m Gail Sasaki and I am the CFO of Netlist and I want to introduce our speaker this morning – Chris Lopes – an 11-year veteran of Netlist – been here from the beginning – helped to shape the company in many ways.

    As noted, he’s an engineer .. and good business person .. (unintelligible) with the company for 7 years .. so .. thanks Chris.

    at the 00:30 minute mark:

    Chris Lopes:

    Alright. We’ll condense about 40 minutes of material into .. 20 (minutes).

    That sound like a good deal ? That’s a bargain right ? Everyone likes a bargain.

    Our forward looking statements – (you) guys have all seen this – you’re all speed readers so that ..

    So who are we ?

    An 11 year old company. We’re a pure play in cloud computing – if you want to think of us that way – we have created $750M of sales in the last 11 years. We went public almost 5 years ago, in November 2006.

    We are a global company. We do our design work in Irvine, CA and San Jose (CA). That is we have a design center there – we have sales offices around the country and in Europe and Asia and we have a large factory in Suzhou, China where we build our sub-systems.

    So we are a sub-systems company. What does that mean ?

    Means we build a piece of a big jigsaw puzzle that goes into a big system – typically a server or storage appliance.

    We deal with tier-1 customers. HP, IBM, DELL, VMC (EMC ?), Cisco, NetApp, FFIV – these are marquee customers – it’s a fairly consolidated market for us. And it takes a long time to get involved with each of these customers.

    You can imagine the qualification requirements and investment on their end of resources.

    There are substantial barriers to getting involved with any of these companies and we’ve succeeded.

    at the 01:45 minute mark:

    Now, we have a couple of products that we will highlight today – really some game changing products – one for server, one for storage area.

    The first is called HyperCloud – that’s a DRAM based product.

    And the second is our NVvault, which is a combination of flash and DRAM.

    And we’ve got about 60 plus patents going.

    So if you look at cloud computing, you are seeing a lot of news on this – obviously iCloud (AAPL) is going to become much more prevalent, NetFlix now working out of the cloud, and of course enterprises are now trying to figure out how not to spend a ton of money themselves and how to plug in and pay for a service.

    at the 02:30 minute mark:

    We’ll focus on a couple of these areas – these are all driving high density architectures in the server space.

    And cloud server units are growing at about 20% a year (for the) next couple of years.

    at the 02:40 minute mark:

    So if you look at the market that WE play in – really 2 areas – the storage side which has a lot to do with flash-based and RAID control memory .. sorry NVvault. HyperCloud plays a little bit there and some battery-backed – we’ll talk a little bit about that towards the end of this presentation and we’ll focus the first part now on the larger market which is our HyperCloud and that’s a $4.3B market and growing.

    So we’ve got a pretty large market to play in.

    And if you look at performance in a server. There is a lot of talk now about tiered storage – lot of activity has gone on into the SSD space and PCI-SSD and you can see the access times.

    at the 03:25 minute mark:

    So think about it this way – a standard hard drive operates in access times of milliseconds. You can get about a thousand times faster by going to an SSD. And you get another thousand times faster again by keeping your memory in DRAM. And so we try to do a lot of work for our customers into the DRAM space to get the maximum performance.
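    (Annotation, not from the talk: the orders of magnitude behind that tiering argument – each tier is roughly 1000x faster than the one below it:)

```python
# Rough access times for the storage tiers described in the talk.
access_time_s = {
    "HDD": 1e-3,   # milliseconds (seek + rotational latency)
    "SSD": 1e-6,   # microseconds - roughly 1000x faster than HDD
    "DRAM": 1e-9,  # nanoseconds - roughly another 1000x faster
}

hdd_to_ssd = access_time_s["HDD"] / access_time_s["SSD"]
ssd_to_dram = access_time_s["SSD"] / access_time_s["DRAM"]
print(round(hdd_to_ssd), round(ssd_to_dram))  # 1000 1000
```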

    at the 03:35 minute mark:

    Talking about a couple of quick examples – MSC Software makes a product called NASTRAN. So it is finite-element modeling, computational fluid dynamics (CFD).

    So if you are building something big that moves, that has airflow or waterflow, you need to analyze how it’s going to operate – so in this example, MSC ran some very very large models and discovered that when they load the servers they use with our memory, they run 21% faster.

    So what does 21% mean ? That means it – does it mean it is worth a 21% premium ?

    Well no, it is worth a lot more than that in this case. In fact you’ll have an engineer – let’s say an aeronautical engineer running a model – typically took a day to run.

    So he loads his information, he runs it, he comes back tomorrow – he gets some results and he makes some decisions about design changes.

    at the 4:35 minute mark:

    When they run it with our memory, they run that same model – they get an answer the same day, they decide to modify and try another case and start that before they leave for the day. So when they come home they’ve effective.. when they come back in the office they’ve effectively doubled their workflow. They now have two models to analyze in the same time.

    at the 4:45 minute mark:

    This allows them to do things that I consider pretty important since I fly a lot.

    So for example, would you rather get on an airplane that had been fully simulated, or one where one guy simulated the first half and the other guy simulated the back half ?

    Right so .. the ability to load very complex models into one system into large DRAM and run them makes better product.

    So that is a high performance computing application.

    at the 05:10 minute mark:

    One that might be a little closer to home for this audience – in the financial services industry. Imagine loading all your trade data – your tick database – right into RAM, so you can analyze it real-time and make decisions for trading.

    And that’s what large amounts of RAM do, especially high-speed RAM.

    So we are doing this in .. risk analytics. Example: what happens if there is another earthquake in Japan – how does it affect a particular stock .. how would I model that, how would I make fast decisions for doing that.

    So we have companies on Wall Street who are looking at using LARGE amounts of memory to enable this kind of activity.

    at the 05:55 minute mark:

    There is a fairly large demand – some call it “insatiable” – in the end-customer space. And there are a couple of key technologies that drive this.

    One you are very familiar with is multi-core technology that companies like INTC and AMD are producing.

    So as you go from Nehalem to Westmere to Romley and we go from 2 and 4 to 8 cores per CPU and on the AMD side – Magny-Cours and Interlagos now coming out with 16 cores.

    Every core increase benefits from more memory – per core. Which means the system itself needs more memory to hold that.

    So that is a big big driver.

    at the 06:30 minute mark:

    You have seen virtualization with the things that VMware is doing. It is putting 15 users on a machine and going to 20 and 25.

    And if you go to a lot of stock trading desks you can see the screens and they are all virtual desktops managed by a server in the background.

    at the 06:45 minute mark:

    And then cloud computing requirements today – people trying to do more in the cloud as you get more comfortable with it, they are running larger jobs.

    You can see things like airplane simulation being done in the cloud .. instead of your own machine at some point in the future.

    And companies like AMZN are working on empowering and developing the hardware and making that available.

    That is a big market for them.

    So the elasticity in being able to move and repartition some memory per user in the cloud is very important. Having a large DRAM space gives that flexibility.

    at the 07:15 minute mark:

    On the supply side there are some very big holes that need filling.

    One is silicon itself in the DRAM has a very difficult time migrating to next-generation technologies.

    at the 07:30 minute mark:

    The physics of DRAM prevent fast scaling.

    It looks like 8Gbit DRAM may be the lowest or the last monolithic die today.

    Today 4Gbit (not gigabytes) just hit the market. And an estimated $25B investment is needed in the DRAM industry to get to the final lithographies needed to produce 8Gbit (DRAM) cost-effectively.

    It is really .. Samsung’s probably the only .. only player with the pockets to do that. ‘Cause they’re making money on Galaxy Tabs (Android tablet computer) and everything else seems like .. today.

    at the 08:00 minute mark:

    So the industry says we still need a solution. INTC’s got a problem, HP’s got a problem, IBM, AMD – all these big guys rely on large amounts of memory being available so that their servers can get to market and do what they’re supposed to do.

    at the 08:05 minute mark:

    But a couple of the alternate technologies today – one is our HyperCloud product.

    HyperCloud is a load-reduced and rank-multiplied technology.

    We’ll go into that in just a second.

    So there are several different pushes in the industry now – one using that technology, and 3DS, which is 3-dimensional stacking: taking 4Gbit dies, stacking them 4-high, and using “through-silicon vias” to connect them.

    That’s a great technology – hasn’t been perfected yet, but it still has loading and speed issues related to what we feel our technology can help overcome.

    And you are seeing SSDs increasingly being used to offload some of the memory and reduce some of that bottleneck for hard drives.

    at the 08:45 minute mark:

    Let’s look at the HyperCloud product.

    This is a 5.5 inch memory stick – if you have ever upgraded your memory yourself.

    In a desktop computer it looks very similar – same size as a socket it fits into.

    Now in a server there are 24 of these sockets that can be filled.

    So one server could hold, you know, $18-20,000 of HyperCloud memory.

    Right ? So it is a .. we’ll just take the cover off of it – that was a heat-spreader there.

    at the 09:25 minute mark:

    We make 2 custom pieces of silicon.

    And we spend a particularly large amount of R&D (research and development) dollars designing these chips.

    The first is a register device that ranks .. that multiplies the ranks available.

    So the system thinks it has 2 ranks to talk to memory.

    We can actually make a 4 rank memory look like 2 ranks – effectively doubling the amount of DRAM on any one DIMM.

    That gives us a cost advantage in some cases and certainly a performance advantage in most (cases).
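The rank-multiplication idea described here can be sketched in a few lines of Python (an illustrative model only – the borrowed address bit and the exact mapping are my assumptions, not Netlist's actual register logic):

```python
# Illustrative sketch: rank multiplication presents a 4-rank DIMM to the
# host as 2 ranks. The register chip decodes one extra address bit to
# pick between the two physical ranks hidden behind each logical
# chip-select, so the host sees half the ranks but twice the DRAM.

def decode_rank(logical_cs, row_addr, borrowed_bit=16):
    """Map a host chip-select plus one borrowed row-address bit
    to one of four physical ranks on the module."""
    hidden = (row_addr >> borrowed_bit) & 1   # bit the host treats as a row bit
    return logical_cs * 2 + hidden            # physical rank 0..3

# Host drives 2 chip-selects; the module fans out to 4 physical ranks:
assert decode_rank(0, 0x00000) == 0
assert decode_rank(0, 1 << 16) == 1
assert decode_rank(1, 0x00000) == 2
assert decode_rank(1, 1 << 16) == 3
```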

    at the 09:35 minute mark:

    But without the isolation devices – there are 9 of those along the edge – that memory would slow the whole bus down .. to an unacceptable speed.

    So we need to compensate for the capacitive loading of all the additional chips by buffering it and isolating it from the system, which allows us to run these very large memory .. very fast.

    And that gives us the maximum speed of 1333MHz and .. think about this 3/4 of a Terabyte .. 768GB (gigabyte) in one server.

    So you can do a lot of work with that kind of .. data in RAM and not having to do disk access to go grab some models.

    at the 10:25 minute mark:

    If you are in oil and gas, (an) analysis company for example, you can load the full oil well into RAM and now analyze it.

    You know, I am told they can spend about $100,000 a minute in analysis of whether they should keep drilling or not.

    So do you want to be the guy that can tell them in 20 minutes or in 2 minutes whether or not there is more oil there?

    So having large amounts of RAM really impacts what you can do.

    at the 10:50 minute mark:

    (We are) making this available in 16GB (gigabyte) and 32GB DIMM densities, which is the largest in the industry today.

    at the 11:00 minute mark:

    Our customers – you can see a couple of them here – HP .. increased server bandwidth capacity to enhance performance. SuperMicro .. unprecedented levels of performance. Viglen .. improved simulation times .. all about performance. No one wants to spend more money unless they get something for it. They get a lot for this. So we are seeing good play.

    at the 11:20 minute mark:

    Now, industry’s moving forward .. it always does .. DDR interface today is DDR3, we will go to DDR4 in about 2.5 years.

    The industry committee JEDEC (Joint Electron Device Engineering Council), of which we are a part, is already working on the interface standards for processors to talk to memory going forward.

    That’s called DDR4.

    There are several changes – lower voltages, higher speeds. And with speeds come loading problems and buffer problems .. buffer solutions are needed.

    at the 11:50 minute mark:

    On the top you can see what the industry is now pushing for DDR4 – it’s called the “distributed buffer architecture”.

    And below that you have what we have today, which happens to be called “a distributed architecture”.

    And so the HyperCloud distributed architecture is already a generation ahead of the rest of the industry.

    There are lots of patents covering this.

    There is a lot of interface between the register and the buffer chips.

    Took us a long time to work out – many years of fine tuning to get that done.

    So we feel we are very well positioned to carry this technology through DDR3 for the next couple of years and onto DDR4, where the market is REALLY projected to grow significantly in volume.

    at the 12:30 minute mark:

    So we have been doing this since 2004 – we started work with AAPL on a rank-multiplied solution to solve a problem in their Xserve.

    And we came across a lot of need for innovation doing that and besides (we) filed some patents along the way – and that was back at DDR1.

    We did it for DDR2. We are doing it for DDR3. We’ll do it for DDR4.

    So across multiple channels, our multiple technologies .. we were able to solve these problems.

    Problems get more difficult .. the speed goes up .. the voltage levels go down .. you really need to know what you are doing in this space.

    at the 13:00 minute mark:

    We have 17 granted patents in this area alone.

    Another 30 in flight (?).

    So this is an area we guard very well – a lot of know-how as well as patents related to this.

    at the 13:15 minute mark:

    Let’s shift over now to the storage side.

    You’ve seen a lot of info in the market on SSDs – there’s over a hundred SSD manufacturers today.

    We make solid-state products that do several different functions.

    First one is – backup in RAID systems.

    So we started doing this work years and years ago when we had batteries backing up the RAID.

    And so this little card here is a cache memory for a RAID system – that’s a DDR2. That’s a 512MB or 1GB version.

    at the 13:45 minute mark:

    And we discovered that our customers don’t like batteries.

    In fact, batteries wear out. So how do we get rid of the batteries.

    We figured (out) a way to do that – mirroring flash and DRAM together with proprietary software or firmware to control that and a “supercapacitor” that holds it up to make the transition.

    at the 14:00 minute mark:

    So imagine you are working (on) your system and the power goes out in your building.

    You are plugged into the wall. You just lost whatever you were working on, right ?

    Not if you have a product like this in your system.

    at the 14:10 minute mark:

    Because it caches it and upon power-down, it takes whatever is in your RAM and moves it over into flash.

    Once it’s in flash, it doesn’t matter when you get power back – it could be 10 years.

    But you’ll have the data.

    And we have enough power in that little pack (the “supercapacitor”) to transfer it over in about a minute .. is what it takes. So transfer’s over.
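The power-fail sequence just described – a supercapacitor holds the module up while DRAM is mirrored to flash – can be sketched roughly as follows (the function name, transfer rate, and hold-up time are illustrative assumptions, not Netlist firmware):

```python
# Hedged sketch of a DRAM-to-flash backup on power failure: the copy
# succeeds only if the supercapacitor can power the module long enough
# for the whole transfer; once in flash, the data needs no power at all.

def backup_on_power_fail(dram, flash, supercap_seconds=60, mb_per_second=20):
    """Copy DRAM contents to flash within the supercapacitor's hold-up window."""
    transfer_seconds = len(dram) / (mb_per_second * 2**20)
    if transfer_seconds > supercap_seconds:
        return False          # capacitor would drain mid-transfer
    flash[:] = dram           # persist: survives indefinitely without power
    return True

dram = bytearray(b"tick-data" * 1000)   # pretend this is cached trade data
flash = bytearray(len(dram))
assert backup_on_power_fail(dram, flash)
assert bytes(flash) == bytes(dram)      # contents preserved across power loss
```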

    at the 14:30 minute mark:

    Now it is not that important if you are working on a PowerPoint or a spreadsheet, but if you are caching important data to a hard drive as a server, it’s EXTREMELY important that you have that protected.

    So this is a very big seller for us.

    And our customer said “well I don’t have a RAID system, but I certainly .. sure want that kind of application – so what can you do to make that available ?”.

    We did that with a product called ExpressVault – we built a complete card where we make an interface to the PCI Express (slot) – plugs right in to a standard system now – our card goes right on there, so it’s really an adapter card. That lets everybody use this function now – if they want.

    at the 15:10 minute mark:

    And that’s at DDR2 and customers said “well that’s great, but I want to go to DDR3”, so we made a DDR3 module.

    In this case we analyzed and figured out, if we can work directly on the memory bus, instead of through the PCI Express bus, we can get a tremendous throughput advantage.

    at the 15:20 minute mark:

    And so our customers said “yeah, that’s great, you better work with CPU manufacturers now”, so we’re doing that.

    The CPU guys and us are working together to enable this product to plug right into a memory socket and give you that instant backup capability.

    And that’s a combination of DRAM and flash (memory).

    You need the DRAM for the speed and you need the flash for the non-volatility, but you gotta have a way to move one to the other very quickly.

    And that’s proprietary and we do that very well.

    at the 15:45 minute mark:

    The company has shown 10 consecutive quarters of gross profit growth.

    Chart may not show it very well – the blue is revenue, and our margins are right now a little above 30% on continuing growth of revenue.

    So we’ve got a nice product mix that has a high margin.

    And nice track record for last 10 quarters.

    at the 16:05 minute mark:

    Our steady-state model says you take a 30% gross profit business, you spend about 15% of that in OpEx (operational expenditure) and you’ve got 15% for the bottom line.

    So we are moving towards that .. very soon .. we are moving towards breakeven here (some point on chart ?) this year.

    And we’re excited about where that goes next year as that whole HyperCloud 32GB (memory module) really takes off.

    at the 16:30 minute mark:

    Takeaways for you today:

    Customers – we deal with top-tier customers .. these are marquee names that are moving into cloud computing in a big way, or already leaders in storage or cloud computing servers.

    The trends in the server space – requiring more memory with multi-cores.

    Increased use of very sophisticated software, analytics, trade .. trading data as we talked about.

    Along with the .. not hesitancy, but the .. inability of the standard DRAM industry to meet those needs with large amounts of silicon, that creates quite an opportunity for us.

    We have strong IP position along high-density and load-reduction – so a lot of competitive barriers there.

    We’ve got some very interesting products related to flash and DRAM together – either boot-up, instant-save, constant-save, RAID-caching, as well as the HyperCloud high-density high-speed high-frequency, with low-latency, and we’ve got a team that’s been together for a (unintelligible) amount of time.

    Founders are still very active in the company – 11 years now.

    Most of our executive team’s been together for over 5 years, 7 and 8 years. So there is a pretty well established proven track record and there is significant management ownership in the company, still. So there is a lot of care.

    Open for questions.

    Yes.

    Question and Answer session:

    at the 17:50 minute mark:

    Question:

    (unintelligible)

    Chris Lopes:

    Well, the question is .. can we talk about design wins on the Romley platform.

    I can’t yet tell you who we’re qualified on.

    Romley has not been released yet. Won’t be till like .. looks like Q1 (2012).

    But I could tell you that we’re working with very large companies who build products and they are working on Romley qualifications.

    Our product performs very well with Romley.

    We have several companies – early platform, in our labs, validating that, as well as our own product at the labs of our customers where they are doing their own qualifications and validations today.

    Question

    at the 18:30 minute mark:

    (unintelligible)

    Chris Lopes:

    Yes. Every new processor family requires a re-qual (re-qualification), as well as every new density.

    So at Westmere, the highest density was 16GB.

    You get Romley, we are really talking 16GB and 32GB.

    32GB (memory modules) are just being released.

    So yeah .. our customers will have those to finish qualifications for Romley.

    Question:

    at the 19:00 minute mark:

    (unintelligible)

    Chris Lopes:

    Right.

    Question:

    (unintelligible)

    Chris Lopes:

    So the question is .. LRDIMM adoption .. the web 2.0 companies.

    LRDIMM is designed to work around (?) Romley.

    Requires a special BIOS – which evidently is not yet completed .. according to my customer sources.

    There is a special BIOS on Westmere that was kind of experimental to try to get early adoption – I don’t know anyone that’s shipping that.

    LRDIMM is really a next-generation product as well.

    I don’t believe that .. and I don’t have complete visibility into everything those companies are doing.

    But it doesn’t seem it would make sense for them to use the Westmere for that.

    Any other questions.

    Ok, I thank you for your attention today ..

    Yes .. (another questioner emerges)

    Question:

    at the 20:05 minute mark

    (unintelligible)

    Chris Lopes:

    We’ve already modelled in a Q1 (2012) launch of Romley .. in our financials.

    So .. if it pushes beyond Q1 (2012) it will have, you know, impact to our growth, but our existing business (is) very steady .. steady-state .. not related to Westmere or Romley launches. It’s really where we grow .. in some of the new products.

    Especially the 32GB (HyperCloud memory modules).

    Yes, sir ..

    Question:

    at the 20:45 minute mark:

    (unintelligible)

    Chris Lopes:

    The question is .. are we as (unintelligible) on the storage side as we are on the server side with DRAM.

    Uh, the answer’s yes.

    Very limited competitive positioning from anyone else in this.

    Because it’s a mixed technology on the storage side .. with DRAM and flash.

    So just a few companies are working this space – mostly module sub-system manufacturers.

    And we have such a good reach with large OEMs – we have been through 4 and 5 year engagements to get through the quality and, you know, support requirements needed to do business with them.

    We have a big advantage because we are IN the customer when that customer needs that product.

    The other companies that are trying to work that space really have never done business with many of these OEMs.

    Question:

    at the 21:40 minute mark:

    (unintelligible)

    Chris Lopes:

    We do, we make an mSATA product and a PCIe (PCI Express) product right now up to 128GB.

    These are embedded solid-state drives – they are more for industrial or for things like server boot-up.

    Since we are already working with the large server guys, this is already a pretty good reach for us – whereas the competition there are companies no one has heard of.

    We are not in the commodity consumer space for SSD – that’s where I mentioned there are a 100 companies doing that.

    There are some interesting companies out there – technologies that I think you need to .. you probably need your own controller to do that well.

    And to have a differentiated space.

    We are partnering with some controller companies today.

    And really finding some niches there .. as opposed to going after mainstream.

    at the 22:40 minute mark:

    So there is .. in the flash area you can look at .. we can make a lot of standard commodity SSDs (?) in (unintelligible) ..

    We make the NVvault product – a battery-backed replacement. We make that product available in the standard memory (form factor) and also do some of the embedded stuff for the mSATA interface as well as PCI.

    Yes, sir ..

    Question:

    at the 23:05 minute mark:

    (unintelligible)

    Chris Lopes:

    Well, we started (unintelligible) as a public .. public lawsuit that we have with GOOG, around violating our IP.

    So that is still pending and it’s been through many revisions and lots of lawyers and judges are involved in that.

    Other than that I don’t have a concern .. but I don’t have complete knowledge in what they are doing there.

    Question:

    at the 23:35 minute mark:

    (unintelligible)

    Chris Lopes:

    Inphi (IPHI). Good question. How is HyperCloud different from what IPHI is offering.

    IPHI is a chip company – so they build a register.

    The register is then sold to a memory company.

    And the memory company builds a sub-system with that.

    And that’s the module they are calling an LRDIMM or Load-Reduced DIMM.

    The difference is that the chip is one very large chip, whereas we have a distributed buffer architecture, so we have 9 buffers and one register.

    Our register fits in the same normal footprint of a standard register, so no architectural changes are needed there.

    at the 24:35 minute mark:

    And our distributed buffers allow for a 4 clock latency improvement over the LRDIMM.

    So the LRDIMM doubles the memory. HyperCloud doubles the memory.

    LRDIMM slows down .. the bus. HyperCloud speeds up the bus.

    So you get ours plugged in without any special BIOS requirement.

    So it plugs into a Westmere, plugs into a Romley, operates just like a register DIMM which is a standard memory interface that everyone of the server OEMs is using.

    The LRDIMM requires a special BIOS, special software firmware from the processor company to interface to it.

    And it’s slower.
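A rough sense of what the "4 clock" advantage mentioned earlier is worth in time, assuming (my assumption – the talk doesn't specify the clock domain) that these are DDR3-1333 I/O clocks:

```python
# Back-of-the-envelope: 4 I/O clocks at DDR3-1333.
# DDR transfers twice per clock, so the I/O clock runs at 1333/2 MHz.
io_clock_hz = 1333e6 / 2
clock_ns = 1e9 / io_clock_hz          # one clock period in nanoseconds
saving_ns = 4 * clock_ns              # the claimed 4-clock improvement

assert round(clock_ns, 1) == 1.5
assert round(saving_ns, 1) == 6.0     # roughly 6 ns per access
```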

    Does that answer your question ?

    Question:

    at the 25:20 minute mark:

    (unintelligible)

    Chris Lopes:

    Yes.

    You could look at it from an investment standpoint of let’s say there is 20M units of opportunity next year for HyperCloud or Load-Reduction DIMM (LRDIMM).

    Inphi is selling a chip into each one of those DIMMs for, I don’t know, $5-10, something like that.

    We are selling a module at $100-200 up to $1000, depending on the density.

    So we (unintelligible) that’s why the sub-system space is very (laughs) exciting.

    We leverage the full bill of materials as well as we have to handle all of the interface issues that come up.
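The chip-versus-module economics above reduce to simple arithmetic (using the midpoints of the ranges quoted in the talk; purely illustrative, not guidance):

```python
# Hypothetical revenue opportunity from the figures quoted: a 20M-unit
# market, ~$5-10 per register chip vs ~$100-1000 per finished module.

units = 20_000_000
chip_revenue = units * (5 + 10) / 2          # midpoint chip price
module_revenue = units * (100 + 1000) / 2    # midpoint module price

assert chip_revenue == 150_000_000
assert module_revenue == 11_000_000_000
assert module_revenue / chip_revenue > 70    # why the sub-system space is "exciting"
```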

    If you think about it – I’ve used this analogy before .. most system manufacturers want to put together a puzzle with 5 big pieces of the jigsaw, not a 100.

    They don’t have time.

    To be one chip and then to rely on someone else to then put it together into a bigger piece and then rely on them to sell it and interface it is a long reach.

    We figure let’s build the bigger piece and make sure it fits right into our customer.

    Yes, sir ..

    Question:

    at the 26:30 minute mark:

    (unintelligible)

    Chris Lopes:

    Sure, from a competitive standpoint for HyperCloud, there’s really only two ways that we know today to get to the higher density.

    One is you stack DRAM and you slow the bus down to talk to that. As long as you can overcome the rank limitation.

    So .. so IPHI and I think there are one or two other companies (IDTI ?) trying to build the interface chips to do the load-reduction.

    But I think IPHI is the only one out in the market today – the primary guy out there.

    In terms of just making larger RDIMMs (registered DIMMs) – standard RDIMMs – you look at the silicon companies themselves, like Samsung, Micron and Hynix, and when they will have 8Gbit technology available to build a standard RDIMM that does what our product does with 4Gbit technology.

    And some analysts are telling us that’s 2.5 to never in years (laughs) to when that happens.

    And they’ve got some challenges in doing that – besides the lithography of getting to 10nm, there is an interface change from DDR3 to DDR4.

    So how much money do you put into a DDR3 version of an 8Gbit (DRAM) if that market is going to shift to a new transit, new speed and new interface voltages, RIGHT when your chip will be available.

    at the 28:05 minute mark:

    So that would be kinda Samsung’s problem. Everybody else has just introduced 4Gbit and they are on a 2.5 to 3 year cycle for density.

    Even if they could overcome the technology challenges, the TIME to get to 8Gbit is about a 2.5 year window.

    So we think we are very well positioned there.

    I think in the 16GB (16 gigabyte memory modules) we did not have this advantage.

    Because with 4Gbit chips (DRAM) – when you have plenty of 4Gbit chips – they can get down in price to obviate the need for 2Gbit rank-doubled (modules).

    So that cross-over is starting to happen already.

    We don’t see that cross-over happening again – at least for 2.5 years .. if ever (meaning newer higher density chips won’t become too cheap – in fact won’t even be available for 2.5 years).

    It IS a more exciting story today than it was when we introduced the product several years ago because of that.

    Yes, sir ..

    Question:

    at the 28:55 minute mark:

    (unintelligible)

    Chris Lopes:

    The question is will people accept slower speeds (i.e. mean LRDIMMs) for some other reasons.

    Sure. Applications that are NOT speed sensitive.

    So let’s say I need large amounts of density. I will sacrifice speed for the larger density.

    We’re not focused on that market.

    So I think there is a place for both of us .. to coexist.

    You know there are also areas where some servers don’t have as many sockets – so the loading isn’t an issue for them – they just want the largest density possible for that particular socket.

    And they don’t have a lot of sockets because there are space constraints and LRDIMM may work fine in those areas.

    Again, not a market that we are counting as part of our camp (?).

    I don’t think there is anything that the Load-Reduction DIMM (LRDIMM) does better than the HyperCloud.

    But I don’t know everything about it. There may be something that they can come out with soon (?).

    Those are great questions. Any other questions.

    Alright, I would like to thank you for your attention today.

    A pleasure speaking with you.

  285. Interesting and informative conference call, Netlist.

    I think it also explains some of the recent hardware patent purchases that Google has made recently as well, especially the section on Flash memory.

    Thanks.

  286. The NLST/GOOG legal fight is stuck at USPTO pending reexaminations etc.

    Same for NLST/Inphi. However Inphi has withdrawn Inphi vs. NLST (I suspect because court docs suggest NLST was challenging validity of two Inphi patents which might be a possible case of double patenting – which could weaken or invalidate two Inphi patents). Meanwhile, NLST vs. Inphi is still ongoing (again pending reexaminations).

    NLST is making some progress on HyperCloud, but its main progress, it seems, has been in the storage space with their battery- and supercapacitor-backed memory-to-flash. Currently of use in RAID systems and storage stuff, but it could possibly become mainstream (imagine a computer that doesn’t lose its info if the power plug is pulled).

  287. Hi netlist,

    I know the Google litigation has stalled at this point, but I’m still a little staggered by the initial acquisition that I wrote about in this post, with Google buying all the patents they could from MetaRAM. As I noted in my comment a few days ago, Google has been acquiring a lot of new patents from IBM, from Quantum, and others that are more hardware-based. Interestingly, a few of those have been in the area of flash memory, like you describe, with the ability to back up data in memory to a flash drive to save it if there is a power loss. I wouldn’t be surprised if Google started looking towards installing flash memory in the machines that run its data centers.

  288. Flash storage and flash as cache between RAM and hard disks. Some of NLST’s NVvault products are used in such stuff (like LSI’s CacheCade). NVvault has flash on-board the memory module which allows saving of RAM contents to flash in event of power failure (similar to battery-backed memory modules – but NVvault has a “supercapacitor” option so one can avoid using a battery).

    Regarding direct use of RAM, this is an interesting article someone posted on NLST yahoo board:

    http://www.wired.com/cloudline/2011/10/ramcloud
    The Quest for the Holy Grail of Storage … RAM Cloud
    Jon Stokes
    posted in Blog, Featured · 6:30 am

  289. Thanks for the link to that article, netlist.

    I’m going to have to revisit the new Google patents on flash memory that they acquired and see exactly what they cover, but I think that those are shared goals they have as well.

  290. NLST Q3 2011 earnings call transcript (not exact)

    http://www.netlist.com/investors/investors.html

    http://viavid.net/dce.aspx?sid=00008EC4
    Netlist Third Quarter, Nine-Month Results Conference Call
    Thursday, November 10, 2011 at 5:00 PM ET

    Participants:

    Chuck Hong – NLST CEO
    Gail Sasaki – NLST CFO
    Jill Bertotti – Allen & Caron (Investor Relations firm for NLST)

    Questions:
    Rich Kugele at Needham
    Jeff Martin of Rodd (?) Capital Partners

    Jill Bertotti:

    .. with that I would now like to turn the call over to Chuck.

    Good afternoon Chuck.

    at the 02:30 minute mark:

    Chuck Hong:

    Thanks Jill, and thank you all for joining us today.

    In the third quarter (Q3 2011) we continued the trend of improving financial performance by steady execution on our base business which includes the Vault family of products, flash and specialty DIMMs.

    We reported revenues of $16.3M up 55% over last year.

    And delivered an adjusted EBITDA breakeven for the quarter.

    We expanded our gross profits, and decreased our net loss by 79% year on year.

    Performance was driven by growth in the Vault product line, where we had outstanding revenue growth of 43% over the prior year.

    And by flash and SSD growth of 226%.

    We shipped $1M of HyperCloud products during the quarter.

    Our best quarter to date.

    at the 03:25 minute mark:

    During the quarter our 8GB and 16GB HyperCloud (memory) modules were qualified on Gigabyte’s high density server motherboard.

    Gigabyte is one of the top manufacturers of server motherboards and other computing hardware.

    As an example of the real world application and value-add of our technology, integrating HyperCloud on one of Gigabyte’s advanced server motherboards enables 288GB of memory capacity running at 1333 megatransfers per second (1333MT/s – often called “1333MHz” in discussions here).

    at the 04:00 minute mark:

    We also teamed up with Swift Engineering, the leading provider of high performance simulations for design in aircraft and racecars.

    Our HyperCloud 16GB memory modules made this possible, where Swift had run into technical limitations in the past.

    Swift has already published use cases and white papers showcasing HyperCloud’s advantages in computational fluid dynamics (CFD) simulations for aerodynamic design.

    While Swift is just one customer, it is a thought leader in the field, and as a result we are receiving inquiries from other firms that conduct complex simulation work.

    at the 04:40 minute mark:

    Also earlier today, we announced that we are running validations that are showing HyperCloud’s significant performance advantages for large data analytics workloads, when compared to industry standard memory on a standard 2-processor server running Sybase IQ, a financial services database.

    Those validations are showing that HyperCloud is delivering performance gains of up to 90% in equity trading applications, and delivers real profit opportunities to financial services firms.

    at the 05:15 minute mark:

    The qualification at Gigabyte and validation on Swift and Sybase applications are among the latest in a series of demonstrations on the benefits of HyperCloud over standard server memory.

    Starting earlier next .. uh .. early next year, these benefits will be brought to the mainstream server market with the release of next-generation servers based on Intel’s Romley processor.

    We expect HyperCloud to start shipping in volume with the launch of these servers by the major OEMs.

    at the 05:50 minute mark:

    While it is still early, we believe that due to the customer benefits, which we have articulated here and over the past year, HyperCloud will catch an unfair share of the market for high-end server memory.

    at the 06:00 minute mark:

    As we get closer to the launch of the new servers, we will be able to quantify, with more granularity, the scope of the volume ramp of HyperCloud in 2012.

    For now, suffice it to say that volumes from the mainstream server OEMs will be substantially larger than what we are seeing today on Westmere systems.

    And that we expect HyperCloud to drive a significant portion of our growing top line in 2012.

    at the 06:30 minute mark:

    Outside of the potential short term financial impact, I believe it is important to provide a perspective on what HyperCloud has accomplished since it was introduced to the industry two years ago.

    Unlike I/O or other peripheral devices, memory, along with the CPU, sits at the heart of the server and is therefore critical to the performance and reliability of the server.

    So it would be natural that server designers would be hesitant to experiment with a unique proprietary memory technology.

    In fact, in the history of servers, outside of a few in-house solutions, there has never been a proprietary memory technology that has been widely adopted by the mainstream server market.

    HyperCloud is the first.

    Once adopted, by the mainstream starting with the Romley-based systems, early next year, and deployed in a variety of applications by end-customers, it is a good bet that the technology will be supported by the OEMs for many years to come, and eventually become a permanent fixture in the server ecosystem.

    at the 07:40 minute mark:

    The JEDEC proposal earlier this year to use NLST’s patented distributed architecture for server memory at DDR4 is a clear indication that HyperCloud is the right technology path for server memory for years to come.

    at the 07:55 minute mark:

    In anticipation of that longer term vision, NLST continues to invest in R&D (research and development) for the next generation HyperCloud, working with OEM and silicon partners to create the highest performance server memory design in the world.

    at the 08:10 minute mark:

    In addition, we have continued to create patents and know-how in order to maintain our significant technological lead in the area of “rank multiplied, load reduced memory architecture”.

    at the 08:25 minute mark:

    In recent months, we have seen a series of positive developments that protect and extend our IP that surrounds HyperCloud.

    Including the recent receipt of our 7th patent in this area this year.

    at the 08:40 minute mark:

    The enormous potential of HyperCloud and its impact on NLST’s business, as well as on the rest of the industry, will be displayed and communicated at the annual Supercomputing conference SC’11 in Seattle next week.

    At the industry’s top venue, we plan to announce the launch of a number of key programs, and demonstrate breakthrough technologies.

    While I am unable to speak to these in much detail today, the anticipated announcements will include a powerful new product platform, and landmark technology partnerships with industry leaders.

    We hope you will stay tuned in the coming days as we will (rollout ?) these programs at SC’11.

    at the 09:30 minute mark:

    Gail will now provide you a more detailed financial update for the quarter, as well as a high-level discussion about 2012.

    Gail ?

    Gail Sasaki:

    at the 09:40 minute mark:

    Thanks Chuck.

    Revenues for the third quarter ended October 1, 2011 (Q3 2011), were $16.3M, up 55% when compared to $10.6M for the third quarter ended October 2, 2010 (Q3 2010).

    And a slight sequential increase over Q2 2011 of 2%.

    This flattish revenue between quarters was due to a shortfall of flash and specialty DIMMs, some of which will get shipped in Q4 2011.

    at the 10:05 minute mark:

    Gross profit for the third quarter ended October 1, 2011 (Q3 2011) was $5.5M or 34% of revenues, compared to a gross profit of $3.0M or 29% of revenues for the third quarter ended October 2, 2010 (Q3 2010), an increase in gross profit dollars of 83% and a sequential increase of 12%.
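    (NOTE: a quick sanity check of the quoted margin figures, computed from the rounded $M amounts above – small differences from the quoted percentages are presumably just rounding in the company’s more precise numbers)

    ```python
    # Sanity check of the quoted Q3 gross-margin figures (amounts in $M, from the call).
    q3_2011_rev, q3_2011_gp = 16.3, 5.5
    q3_2010_rev, q3_2010_gp = 10.6, 3.0

    margin_2011 = q3_2011_gp / q3_2011_rev            # ~0.337, quoted as 34% of revenues
    margin_2010 = q3_2010_gp / q3_2010_rev            # ~0.283, quoted as 29% of revenues
    yoy_gp_growth = (q3_2011_gp - q3_2010_gp) / q3_2010_gp  # ~0.833, quoted as 83%

    print(round(margin_2011 * 100), round(yoy_gp_growth * 100))
    ```
    
    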

    at the 10:30 minute mark:

    The quarter over quarter improvement was due to the 55% increase in revenue, a favorable DRAM cost environment, as well as the increased absorption of manufacturing costs as we produced 21% more units than the year earlier quarter.

    at the 10:45 minute mark:

    We expect our gross profit to range from 30% to 35% of revenues during the fourth quarter (Q4 2011) this year, which will be dependent on the quarter’s product mix, production volume and DRAM cost.

    at the 10:55 minute mark:

    Adjusted EBITDA after adding back net interest expense, income taxes, depreciation and stock based compensation and net non-operating expense was $32,000 for the third quarter ended October 1, 2011 (Q3 2011) compared to an adjusted EBITDA loss of $4.0M for the prior year period.

    at the 11:05 minute mark:

    Net loss in the third quarter ended October 1, 2011 (Q3 2011) was $1.0M or $0.04 loss per share, compared to a net loss in the prior year of $4.9M or a $0.20 loss per share.

    at the 11:30 minute mark:

    These results include stock based compensation in the third quarter of $464,000 compared with $413,000 in the prior year period.

    And a depreciation and amortization expense of $534,000 in the most recent quarter, compared with $561,000 in the year earlier period.

    Revenues for the nine months ended October 1, 2011 (Q3 2011) were $44.3M, up 60% from revenues of $27.8M for the prior year period.

    at the 12:00 minute mark:

    Gross profit for the nine months ended October 1, 2011 (Q3 2011) was $14.3M or 32% of revenue, compared to a gross profit of $6.7M or 24% of revenue for the nine months ended October 2, 2010 (Q3 2010).

    at the 12:20 minute mark:

    Adjusted EBITDA loss, after adding back net interest expense, income taxes, depreciation, stock based compensation and net non-operating expense was $2.2M for the first nine months ended October 1, 2011 (Q3 2011), compared to an adjusted EBITDA loss of $9.8M for the prior year period.

    at the 12:40 minute mark:

    Net loss for the nine months ended October 1, 2011 (Q3 2011) was $5.4M or $0.22 loss per share, compared to a net loss in the prior year period of $11.9M or a $0.51 loss per share.

    These results include stock based compensation expense of $1.2M for both periods, and depreciation and amortization expense of $1.7M for both periods.

    at the 13:05 minute mark:

    Total operating expenses were flattish at $6.5M (Q3 2011) from $6.3M in the previous consecutive quarter (Q2 2011).

    While we had expected this to increase for the second half of the year, we have been able to bring the development cost that we had planned for next-generation products in below budget, as our engineering team completed work ahead of schedule with fewer external resources.

    at the 13:30 minute mark:

    The decrease from $7.9M in the year earlier quarter (Q3 2010) was primarily due to a 20% decrease in research and development (R&D) expense, related to an absence in 2011 of non-recurring engineering costs incurred in 2010 in association with the next-generation introduction.

    at the 13:50 minute mark:

    Sales and marketing expenses decreased by 19% from the previous year due to improved efficiency and lower sample costs.

    at the 13:55 minute mark:

    Administration expense decreased 10% from the year earlier quarter.

    at the 14:00 minute mark:

    Overall we expect that total operating expenses will be slightly lower during the fourth quarter of the year.

    at the 14:10 minute mark:

    On the IP front, we continue to vigorously defend our patent rights at the USPTO.

    In October (2011), we received positive news via an office action in the ‘912 reexam, allowing 10 broad original claims and 10 new claims.

    As noted in earlier calls, these processes will run their course and we remain comfortable in our position and confident in the validity and enforceability of our patents.

    at the 14:35 minute mark:

    We did not record a benefit for income taxes for the third quarter ended October 1, 2011, as operating loss carryforwards generated were fully reserved.

    at the 14:45 minute mark:

    On a go-forward basis, we anticipate a tax rate of zero percent until we begin to utilize our fully reserved net deferred tax assets.

    at the 14:55 minute mark:

    We ended the third quarter with cash, cash equivalents and investments in marketable securities totaling $10.6M, compared to $12.1M as of July 2, 2011.

    At the end of the quarter, we had unutilized availability of $1.5M on our credit line.

    During the third quarter (Q3 2011), capital expenditures totaled $326,000, compared to the same number in the previous year’s quarter (Q3 2010).

    at the 15:20 minute mark:

    We anticipate investment in equipment to support increased capacity and our new products over the next several months of approximately $500,000.

    at the 15:25 minute mark:

    We were able to reach an adjusted EBITDA breakeven this quarter.

    However, we may still be a net user of cash over the next couple of quarters, depending on our cash cycle, which increased by 12 days from Q2 2011.

    As you know from past calls, we believe we have sufficient capacity on our current $15M line of credit for working capital needs.

    at the 15:50 minute mark:

    At the end of the quarter, we filed a new S-3 that puts in place a shelf offering that COULD allow us to sell up to $40M in securities.

    In the event we utilize this shelf offering, it would be to fund anticipated HyperCloud ramp, and next-generation HyperCloud NVvault R&D (research and development), as well as to accelerate the commercialization of current products.

    at the 16:10 minute mark:

    Since our next call will not take place until the new year, we would like to wrap up our prepared remarks with some high level guidance for 2012.

    With the introduction and qualification of new products in the remainder of 2011 and throughout next year, we believe a revenue increase of 50% to 100% will be realizable for 2012, with the majority of that growth weighted towards the second half.
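    (NOTE: rough implication of that guidance – full-year 2011 revenue is my estimate, taking the $44.3M nine-month figure plus a Q4 assumed to be roughly in line with Q3’s $16.3M; the resulting range is only a back-of-envelope, not company guidance)

    ```python
    # Back-of-envelope on the 50%-100% growth guidance for 2012 (amounts in $M).
    nine_month_2011 = 44.3
    q4_2011_est = 16.3  # assumption: Q4 2011 roughly in line with Q3 2011
    fy2011_est = nine_month_2011 + q4_2011_est  # ~$60.6M estimated full-year 2011

    low, high = fy2011_est * 1.5, fy2011_est * 2.0
    print(f"implied 2012 revenue: ${low:.0f}M to ${high:.0f}M")  # ~$91M to ~$121M
    ```
    
    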

    at the 16:40 minute mark:

    In addition, we anticipate crossing over into GAAP profitability during Q2 2012 while we continue to invest aggressively in next-generation R&D.

    Thank you very much for listening in today.

    Operator we are now ready for questions.

    Question & Answer session ..

    at the 17:30 minute mark:

    Rich Kugele at Needham:

    Good afternoon, Chuck and Gail.

    Uh .. can you hear me ok ?

    Chuck Hong:

    Yeah, hi Rich.

    Gail Sasaki:

    Hi Rich.

    at the 17:35 minute mark:

    Rich Kugele at Needham:

    Hi .. um .. so uh uh uh a few questions and .. uh .. and I apologize for being slightly out of order here, because Gail you just said some pretty meaningful things.

    But um .. I’ll start actually with what I had prepared.

    Uhm .. you know I just want to talk a little bit about the LRDIMM market as it relates to Romley.

    And uh .. or the next-gen from Intel.

    Uhm .. can you just talk about how big that market is – I know that in recent months we’ve seen a few competitors actually exit that ..

    (NOTE: hey! wait a second – this is what we have been discussing, i.e. as I have mentioned, IDTI deemphasizing this space over 3 consecutive conference calls, and their earlier mention of TXN (Texas Instruments) being “not interested” in this space – which might have links to NLST vs. TXN, which was supposedly settled favorably to NLST)

    .. space .. uh .. from TI (Texas Instruments) and IDTI .. uh .. just outright ..

    (NOTE: voice almost breaking – their “analysis” been seriously flawed ignoring NLST – though it is to his credit that he is openly coming out on this though eventually had to)

    .. difficult time figuring out how many units that market actually is and how competitive the solutions are or aren’t.

    Um .. then I have some followups.

    at the 18:20 minute mark:

    Chuck Hong:

    Yeah Rich, I think there has been a handful of reports .. um .. that have been written up .. about the .. uh .. LRDIMM market.

    Uh .. that marketplace is the exact .. target market .. uh .. for HyperCloud.

    Uh .. that we’re targeting.

    Um .. there’s probably anywhere between 70M and 80M registered DIMM or server memory modules being shipped worldwide today.

    Uh .. those reports indicate that over time .. uh .. the LRDIMM .. uh .. may become 10-15% .. uh .. of that market.

    Um .. my my personal view is that it will probably NOT be that large.

    Uh .. the difference in .. uh .. uh .. the way that chip manufacturers, buffer manufacturers like an Inphi .. uh .. address that business opportunity is different from ours.

    They are selling a chipset that .. uh .. you know that is valued at $10-$20, whereas we are selling an entire memory module .. uh .. that is valued at anywhere between $300-$400 up to $1200-$1500 depending on the density. Primarily it will be 16GB and 32GB.

    at the 19:50 minute mark:

    So .. we believe the market will be certainly in the millions of units .. uh .. come next year.

    With the LRDIMM and the HyperCloud .. um .. and .. uh .. at some point down the road as the Romley matures .. uh .. that it may .. the percentages may get into the teens (i.e. above 10%).

    For next year, I think it will be a smaller portion, but for us it’s still a a tremendous opportunity .. uh .. you know ..

    at the 20:25 minute mark:

    Rich Kugele at Needham:

    (here he interrupts Chuck Hong)

    From selling .. selling the module .. so much more .. than if you were just selling a chip .. right ?

    Chuck Hong:

    Absolutely. Absolutely.

    at the 20:35 minute mark:

    You know even at a let’s say if there are 70M RDIMMs being shipped today, registered DIMMs, and um .. the .. opportunity for the the high performance module is about a million units .. uh uh .. at an ASP (average selling price) of .. uh .. $500 let’s say.

    That’s a $500M market opportunity.
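    (NOTE: Hong’s market math, laid out explicitly – the 70M RDIMM base, the ~1M-unit near-term module opportunity at a ~$500 ASP, and the longer-term 10-15% penetration the analyst reports suggest; all figures are as cited on the call)

    ```python
    # Back-of-envelope for the module opportunity as Hong sketches it on the call.
    rdimm_units = 70e6      # annual registered-DIMM units cited (low end of 70M-80M)
    near_term_units = 1e6   # high-performance module opportunity he cites
    asp = 500               # assumed average selling price per module, $

    near_term_tam = near_term_units * asp
    print(f"near-term opportunity: ${near_term_tam / 1e6:.0f}M")  # $500M, as quoted

    # Longer term, if penetration reaches the 10-15% of RDIMMs the reports suggest:
    for pen in (0.10, 0.15):
        print(f"{pen:.0%} penetration: {rdimm_units * pen / 1e6:.1f}M modules/year")
    ```
    
    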

    Rich Kugele at Needham:

    Ok, uh .. and then I guess just lastly on .. on on some of Gail’s comments most recently there .. uh .. at the end of .. prepared remarks .. um .. it seems at 50%-100% of revenue growth year over year (i.e. in one year) that that would HAVE to be .. HyperCloud you know I guess you’re not breaking that out at this point.

    But you know maybe another way of approaching it is what do you expect the Vault business to do year over year and the more traditional memory side as well.

    at the 21:30 minute mark:

    Chuck Hong:

    Our plan shows that .. uh .. the revenues will increase throughout our product line with the exception of the PERC business, the battery-backed .. and the battery-free solution that is being shipped into DELL.

    That will be .. that will decline .. um .. with the introduction of their next-generation servers .. um .. next year.

    But with the exception of that, NetVault (NVvault), flash, specialty DIMMs including VLP (very low profile memory for blade servers) .. uh .. which will be shipped into .. uh .. a major OEM .. uh .. blade server .. uh .. and then HyperCloud.

    All of those will show .. uh .. double-digit growth in terms of revenues. All those segments.

    So that’s .. if you add that up, we ARE looking at revenue growth that is exceeding 50% for 2012.

    50% growth over this year.

    at the 22:50 minute mark:

    Rich Kugele at Needham:

    Ok, then just one last one um .. Gail what was the actual cash burn in the quarter or cash from operations cash used in operations ?

    Gail Sasaki:

    at the 22:55 minute mark:

    Um .. the cash from operations burn was under $1M.

    And then we had a .. um .. some purchases of fixed assets in range of .. about $300,000.

    And then we had some jet service (?) about $400,000 (what is this jet service ?).

    Rich Kugele at Needham:

    Ok, great. Thank you very much.

    Gail Sasaki:

    Thank you Rich.

    at the 23:40 minute mark:

    Jeff Martin of Rodd (?) Capital Partners:

    Thanks. Good afternoon and thanks for taking my questions.

    Wanted to get a sense of .. whether the HyperCloud orders in the quarter were from previously announced vendors or from new vendors.

    And if .. uh .. if that pertains to Gigabyte and Swift .. uh .. could you clarify ?

    at the 24:00 minute mark:

    Chuck Hong:

    Jeff, can you repeat your question ? Sorry.

    Jeff Martin of Rodd (?) Capital Partners:

    Sure. The HyperCloud shipments in the quarter.

    I believe it was a $1M .. $1M of revenue.

    Were they from previously announced vendors or from .. uh .. from new vendors.

    Chuck Hong:

    Yes, previously announced customers.

    Jeff Martin of Rodd (?) Capital Partners:

    Ok .. and .. in terms of the application, were those mainly servers or storage ?

    Chuck Hong:

    Uh .. mostly .. uh .. servers.

    at the 24:35 minute mark:

    Jeff Martin of Rodd (?) Capital Partners:

    Ok, and then can you can you kind of give a sense of the opportunity and how many data centers are those customers running in and how large could these initial – I assume these are more on the initial order side of things – and how how much could those ramp .. um .. over 2012.

    at the 24:55 minute mark: