Sunday, April 10, 2011

Fast Clock versus Wide I/O Memory Interfaces

Presentations at the recent Global Semiconductor Alliance Memory Conference also highlighted another long-standing memory-related discussion: the data transfer rate of faster-clock architectures versus that of wider-I/O architectures. During the morning break in presentations, we were asked which would be the higher-performance interface architecture for new and emerging memory technologies.

The total data transfer rate is of course determined by two elements. The clock rate itself is one element in the calculation; the second element is the number of bits transferred at each “tick” of the device’s clock. The typical shorthand of referring only to an interface’s clock speed can be quite misleading, and is a useful measure of performance only when comparing two similar architectures.
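The two-element calculation can be sketched in a few lines. The clock rates and bus widths below are hypothetical round numbers chosen only to illustrate the point, not figures from any particular device:

```python
def transfer_rate_gbps(clock_mhz: float, bits_per_tick: int) -> float:
    """Peak transfer rate in Gbit/s: clock rate times bits moved per tick."""
    return clock_mhz * 1e6 * bits_per_tick / 1e9

# A fast clock on a narrow bus...
narrow_fast = transfer_rate_gbps(clock_mhz=800, bits_per_tick=16)

# ...and a clock one-eighth the speed on a bus eight times as wide
# deliver exactly the same peak bandwidth: 12.8 Gbit/s in both cases.
wide_slow = transfer_rate_gbps(clock_mhz=100, bits_per_tick=128)

print(narrow_fast, wide_slow)
```

This is why quoting clock speed alone is misleading: the 100 MHz interface above matches the 800 MHz one bit-for-bit.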

Both approaches can provide unique system-level values, and the final selection depends on the cost-performance requirements of each application. Wide I/O memory architectures with slower clock rates are usually selected for applications that require a larger number of memory components. In these configurations, system-level designers can achieve additional value from the reduced level of complexity that results from a slower clock rate.

On the other hand, smaller memory arrays that require a very fast data transfer rate can benefit from a different configuration. As explained in the literature for interfaces that utilize faster clock rates and narrower bus interfaces, the cost/performance value of these configurations increases as the number of memory components is reduced. Wide I/O architecture may actually carry an additional cost disadvantage in small memory array applications if the total number of components has to be increased simply in order to accommodate the wider bus width.
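The cost disadvantage described above comes from a simple constraint: the component count must satisfy both the bus width and the capacity requirement, whichever demands more parts. A minimal sketch, using hypothetical part widths and capacities for illustration:

```python
import math

def parts_needed(bus_width_bits: int, part_io_bits: int,
                 capacity_needed_gb: float, part_capacity_gb: float) -> int:
    """Components required: the larger of what the bus width demands
    and what the total capacity demands."""
    for_width = math.ceil(bus_width_bits / part_io_bits)
    for_capacity = math.ceil(capacity_needed_gb / part_capacity_gb)
    return max(for_width, for_capacity)

# Hypothetical small array: only 2 GB needed, built from 1 GB x16 parts.
# Filling a 128-bit wide bus forces 8 components...
print(parts_needed(128, 16, 2, 1))

# ...while a 32-bit bus needs only the 2 parts that capacity requires.
print(parts_needed(32, 16, 2, 1))
```

In the wide-bus case, six of the eight components exist only to populate the bus, which is precisely the small-array cost penalty the literature describes.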

It is also useful to consider the end application itself. As long as the memory element of an OEM design holds a secondary level of consideration relative to the processing element (as was common when the processor and operating system defined most of the performance in desktop PCs), other considerations have also mattered to OEMs. These have included ensuring the broadest base of suppliers, the availability of a wider range of supporting features from third-party suppliers to enable more OEM product differentiation, the re-use of existing OEM IP, and the ease of transition to the next-generation product.

Specialty applications also have a different set of criteria than commodity-like applications. One potentially high-volume specialty memory application emerging today, in which the cost/performance of the memory significantly increases the value of the end product, is the growing opportunity for non-volatile NAND enterprise SSDs to replace DRAM when power consumption becomes one of the overriding considerations.

This different set of criteria for new applications has profound significance when considering the most competitive interface architecture for new and emerging memory cell architectures. The market entry point for any new technology begins as a relatively small volume opportunity, yet could represent the opportunity to introduce a new and unique interface.

However, when the cost/performance attributes of the two architectural concepts are comparable, the rate at which the number of memory bits per die is increasing also has to be compared to the rate at which the total number of memory bits in the system-level memory array is increasing. In other words, the system designer has to decide whether the total number of memory components is likely to increase or decrease over time in the particular system-level design.
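The comparison of the two growth rates can be made concrete. The annual growth factors below are hypothetical, chosen only to show how the component count trends when per-die density grows faster than system-level demand:

```python
def projected_component_count(initial_count: int, years: int,
                              die_growth: float, system_growth: float) -> float:
    """Components needed after `years`, if per-die capacity grows by
    `die_growth` per year and total system memory by `system_growth`."""
    return initial_count * (system_growth ** years) / (die_growth ** years)

# Hypothetical: die density doubles roughly every two years (~1.41x/yr)
# while system memory demand grows 25% per year. Starting from 8 parts,
# the projected count falls over four years.
print(projected_component_count(8, 4, die_growth=1.41, system_growth=1.25))
```

A shrinking count favors the fast-clock, narrow-bus approach over time; if the system growth factor were larger than the die growth factor, the count would rise instead, favoring wide I/O.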

The rate at which a new memory technology progresses in density is particularly difficult to predict. That challenge is made more difficult by the fact that the initial entry point for any new memory technology may not be the highest volume opportunity once the technology becomes more established.

We believe that the most beneficial approach for new memory technologies is to keep all interface options open until the full potential of manufacturability and market acceptance has been demonstrated. New nonvolatile memory technologies that are replacing existing DRAM or NAND in existing applications may find easier market entry by following the protocol and pin assignments of one of the high-volume interfaces as closely as is practical.

Opportunities to enable a completely new OEM configuration, on the other hand, provide more flexibility in creating a new and more efficient interface. It is in this case of enabling a new set of performance features that the memory architect must once again weigh the specific target applications in choosing between an interface with a fast clock and a narrow data path and one with a slower clock and a wider I/O interface.

www.convergentsemiconductors.com - Global Analysis of Memory Strategies and Issues