Thursday, April 28, 2011

Expanding Memory Options with Mobile Product Packaging

Keywords: MCP, SiP, SOC, 3D IC, TSV, Moore’s Law, Multi-Chip Packaging

Why is the activity concerning multi-chip packaging (MCP) technologies increasing?

SEMATECH announced this week that ASE, Altera, Analog Devices, LSI, Qualcomm, and ON Semiconductor have all joined SEMATECH's 3D Enablement program. These companies will join CNSE, GlobalFoundries, Hewlett Packard, Hynix, IBM, Intel, Samsung, and UMC in a broad initiative to enable industry infrastructure for TSV-based 3D stacked IC solutions. Also in support of wafer and tool standards for TSV technology, SEMI has three task groups within its 3D IC group, with the formation of a fourth task group under way.

The Global Semiconductor Alliance (GSA) also has a 3D/TSV Technology Working Group. At the recent GSA Memory Conference highlighting 3D Architecture with Logic and Memory Integrated Solutions, one speaker forecast that stacked multiple wide I/O DRAMs would appear by 2015 using TSV, while another predicted that TSV-based wide I/O DRAM would not arrive until “…the second half of the decade.” Given the latitude in defining these marketing terms, the two statements aren’t really contradictory. I concluded that we are just a few product development cycles away from the commercial acceptance of TSV-stacked memory die products (as early as 2014), to be followed by continuing high-volume expansion of this form factor.

This continuing interest in multi-die packages results from the shift away from desktop PCs and toward mobile devices as the dominant target applications for new technologies. Product development emphasis continues to shift toward nonvolatile memory and smaller form factors, supported by the further empowerment of single-core processors in accordance with Moore’s Law.

Memory technologies immediately enter into the equation for mobile devices wherever there are processors. The architectural question is whether the memory requirement is small enough, with enough process compatibility, to be embedded in a large processor die (up to ~70% of the area in some cases) or System on Chip (SOC), or whether the memory requirement is large enough, complex enough, or requires enough flexibility in performance attributes to be better suited for a multi-die configuration using TSV or System-in-Package (SiP).

That transition opens up the processing R&D and packaging possibilities for memory technologies, as is clearly shown by the ever-expanding number of part numbers and configurations supported by “DRAM companies.”

The broader implication is that the range of memory performance attributes continues to increase as the semiconductor industry identifies a widening set of new applications as targets for new memory development. This trend implies that a wider selection of memory interfaces, packaging options, performance attributes, and densities will continue to be developed. In particular, it leads toward a wider set of performance attributes that includes not only the usual tradeoffs of speed, density, power consumption, and cost per bit; we also expect variable performance tradeoffs to extend to cell endurance, latency, non-volatility, compatibility with logic processes, and time-to-market for new configurations.

We follow this trend closely and have several reports listed on our website detailing the strength and pervasiveness of this trend, and the market opportunities it presents.

www.convergentsemiconductors.com - Global Analysis of Memory Strategies and Issues 

Wednesday, April 20, 2011

Samsung/Seagate Announcement Highlights Stress Cracks in Computer Memory Hierarchy

Samsung and Seagate announced a broad strategic alignment today under which Samsung has transferred its hard disk drive operations to Seagate for an estimated $1.375 billion. Included in the agreement, in addition to extensive cross-licensing of existing patents, is a supply agreement under which Samsung will provide semiconductor memory for Seagate’s NAND Enterprise SSD, solid state hybrid drives, and other products.

This alignment reflects a general theme that we—along with Samsung and other observers of memory technologies—have supported for some time.

The general cost and performance guidelines under which memory technologies have developed in the past are now shifting. OEMs and the semiconductor industry are very familiar with the computing memory hierarchy, in which the price and performance expectations for a memory technology have been based on its architectural proximity to the processing element. The driving goal of this equation was to keep the processing element as busy as possible, and the primary tradeoff of cost per bit versus memory access time defined the value of each memory technology. Ranked from top to bottom in both performance and cost per bit, the traditional hierarchy ran from SRAM to DRAM to non-volatile semiconductor memory to hard disk drive.
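As a rough sketch of that hierarchy, the ranking can be laid out with order-of-magnitude access times and relative cost per bit. The specific figures below are our own illustrative approximations, not numbers from any announcement:

```python
# The traditional computing memory hierarchy, ranked from closest to the
# processor (fastest, most expensive per bit) to most remote storage.
# Access times are rough order-of-magnitude assumptions for illustration.
hierarchy = [
    # (technology, approx. access time, relative cost per bit)
    ("SRAM", "~1 ns",  "highest"),
    ("DRAM", "~50 ns", "high"),
    ("NAND", "~25 us", "low"),
    ("HDD",  "~5 ms",  "lowest"),
]

for tech, access, cost in hierarchy:
    print(f"{tech:>4}: access {access:>7}, cost/bit {cost}")
```

The stress cracks described below appear when a requirement such as power consumption starts to outweigh this simple speed-versus-cost ranking.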

There are two elements that have shifted relative to the original hierarchy. One element is the shift toward mobile computing applications, which naturally elevates the value of non-volatile memory technologies.

The second shift is the massive increase in the amount of data being created, transported, and stored, as Samsung and others have pointed out at numerous conferences. The resulting pressure that this increasing data load exerts on the computing infrastructure is causing multiple stress cracks in the traditional computing memory hierarchy. The requirements for lower power consumption and lower heat generation led to the first major crack, which was the substitution of NAND for DRAM in server applications. That crack has now broadened with the Samsung/Seagate joint participation in Enterprise SSD semiconductor memory products.

We expect that future stress cracks at this and other levels in the traditional computing memory hierarchy will continue to create market entry points for other new memory solutions.


Sunday, April 10, 2011

Fast Clock versus Wide I/O Memory Interfaces

Presentations at the recent Global Semiconductor Alliance Memory Conference also highlighted another traditional memory debate: the data transfer rate of faster-clock architectures versus that of wider-I/O architectures. During the morning break in presentations, we were asked which would be the highest-performance interface architecture for new and emerging memory technologies.

The total data transfer rate is of course determined by two elements. The clock rate itself is one element in the calculation; the second is the number of bits transferred at each “tick” of the device’s clock. The typical shorthand of referring only to the clock speed of an interface can be very misleading, and is only a useful measure of performance when comparing two similar architectures.
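A back-of-the-envelope calculation makes the point. The clock rates and bus widths below are invented for illustration, not taken from any real interface:

```python
# Peak data transfer rate = clock rate x bits transferred per clock tick.
# Two very different designs can land on the same peak rate.

def throughput_gbps(clock_mhz: float, bus_width_bits: int) -> float:
    """Peak transfer rate in gigabits per second."""
    return clock_mhz * 1e6 * bus_width_bits / 1e9

# Hypothetical figures: a fast, narrow interface vs. a slow, wide one.
fast_narrow = throughput_gbps(1600, 16)   # 1600 MHz clock, 16-bit bus
slow_wide   = throughput_gbps(200, 128)   # 200 MHz clock, 128-bit bus

print(fast_narrow)  # 25.6 Gb/s
print(slow_wide)    # 25.6 Gb/s -- same peak rate, very different tradeoffs
```

Quoting only “1600 MHz” versus “200 MHz” would suggest an 8x performance gap that does not exist at the system level.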

Both approaches can provide unique system-level values, and the final selection depends on the cost-performance requirements of each application. Wide I/O memory architectures with slower clock rates are usually selected for applications that require a larger number of memory components. In these configurations, system-level designers can achieve additional value from the reduced level of complexity that results from a slower clock rate.

On the other hand, smaller memory arrays that require a very fast data transfer rate can benefit from a different configuration. As explained in the literature for interfaces that utilize faster clock rates and narrower bus interfaces, the cost/performance value of these configurations increases as the number of memory components is reduced. A wide I/O architecture may actually carry an additional cost disadvantage in small memory array applications if the total number of components has to be increased simply in order to accommodate the wider bus width.

It is also useful to consider the end application itself. As long as the memory element of an OEM design holds a secondary level of consideration relative to the processing element (as was common when the processor and operating system defined most of the performance in desktop PCs), other considerations carry significant weight among OEMs. These have included ensuring the broadest base of suppliers, the availability of a wider range of supporting features from third-party suppliers to provide more OEM product differentiation, the re-use of existing OEM IP, and the ease of transition to the next-generation product.

Specialty applications also have a different set of criteria than commodity-like applications. One potentially high-volume specialty memory application emerging today, in which the cost/performance of the memory significantly increases the value of the end product, is the growing opportunity for non-volatile NAND Enterprise SSDs to replace DRAM when power consumption becomes one of the overriding considerations.

This different set of criteria for new applications has profound significance when considering the most competitive interface architecture for new and emerging memory cell architectures. The market entry point for any new technology begins as a relatively small volume opportunity, yet could represent the opportunity to introduce a new and unique interface.

However, in a case where the cost/performance attributes are comparable between the two architectural concepts, the rate at which the number of memory bits per die is increasing also has to be compared to the rate at which the total amount of memory bits in the system-level memory array is increasing. In other words, the system designer has to decide if the total number of memory components is likely to increase or decrease over time in the particular system-level design.
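The comparison of those two growth rates can be sketched in a few lines. All of the growth rates and starting densities below are assumptions chosen purely for illustration:

```python
import math

# If per-die density grows faster than the total system memory requirement,
# the component count in the array shrinks over time -- and vice versa.

def components_needed(system_bits: float, bits_per_die: float) -> int:
    """Minimum number of memory die needed to cover the system requirement."""
    return math.ceil(system_bits / bits_per_die)

system_bits = 8e9    # assume the design needs 8 Gb of memory today
bits_per_die = 1e9   # assume 1 Gb per die today

for generation in range(4):
    n = components_needed(system_bits, bits_per_die)
    print(f"generation {generation}: {n} die")
    system_bits *= 1.5   # assumed: system requirement grows 1.5x per generation
    bits_per_die *= 2.0  # assumed: die density doubles per generation

# Under these assumptions the count falls from 8 die to 4, which would tend
# to favor a faster clock and narrower bus over the product's lifetime.
```

Reversing the assumed growth rates (system requirement outpacing die density) would drive the component count up instead, which tends to favor the wide I/O approach.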

The rate at which a new memory technology progresses in density is particularly difficult to predict. That challenge is made more difficult by the fact that the initial entry point for any new memory technology may not be the highest volume opportunity once the technology becomes more established.

We believe that the most beneficial approach for new memory technologies is to keep all interface options open until the full potential of manufacturability and market acceptance has been demonstrated. New nonvolatile memory technologies that are replacing existing DRAM or NAND in existing applications may find easier market entry by following the protocol and pin assignments of one of the high-volume interfaces as closely as is practical.

Other opportunities to enable a completely new OEM configuration would provide more flexibility in creating a new and more efficient interface. It is in this case of enabling a new set of performance features that the memory architect has to once again consider the specific target applications in choosing between an interface with a fast clock and a narrow data path or an interface with a slower clock combined with a wider I/O interface.


Monday, April 4, 2011

A Lotta Yotta

At GSA’s Memory Conference on "3D Architecture with Logic and Memory Integrated Solutions" in San Jose last week, Samsung’s "Keynote Address: Rewriting the IT Power Equation" opened with a question to the audience: could anyone identify the largest industry-recognized scientific notation prefix? The correct answer is “yotta,” which denotes 10^24.

Is a number of that magnitude one with which we should become familiar? Absolutely! The prefix immediately below yotta is zetta (10^21). A zettabyte of information, or a billion terabytes if you prefer, is just under the estimated 1.2 zettabytes of new data created in 2010. By 2020, the amount of new data created in a single year is estimated to reach 35 zettabytes.

The prefix immediately below zetta is exa (10^18). That notation is also of immediate importance because a 64-bit address space can “only” address up to 16 exabytes in existing architectures. Cisco estimates that global IP traffic in 2014 will reach 767 exabytes.
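For readers who want to check the arithmetic, the prefixes and the 64-bit limit work out as follows (the 1.2-zettabyte figure is the estimate quoted above; the rest is plain arithmetic):

```python
# SI prefixes from the keynote, in plain arithmetic.
EXA   = 10**18
ZETTA = 10**21
YOTTA = 10**24  # the largest industry-recognized prefix

terabyte = 10**12
new_data_2010 = 1.2 * ZETTA        # estimated new data created in 2010
print(new_data_2010 / terabyte)    # ~1.2e9 -- over a billion terabytes

# A 64-bit byte address reaches 2^64 bytes: exactly 16 exbibytes,
# which is about 18.4 exabytes in decimal (power-of-ten) units.
print(2**64 / 2**60)   # 16.0
print(2**64 / EXA)     # ~18.45
```

The commonly quoted “16 exabytes” for a 64-bit address space uses the binary (2^60) sense of the prefix; in strict decimal units the ceiling is closer to 18.4 exabytes.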

The significance of this much data is that the infrastructure necessary to create, transport, and store that much information has a growing impact on the data processing infrastructure—and particularly on the cost/performance ratio of memory technologies.

Most of us are familiar with the traditional computing memory hierarchy that describes the relative cost and performance of memory technologies as we move outward from high-performance cache memory at the closest physical and logical proximity to the processing element. As we move away from that position and toward the various levels of more remote data storage, the cost-per-bit and other performance requirements decline for memory technologies.

So what do we make of the recent introduction of lower-performance NAND replacing high-endurance and high-performance DRAM in a growing number of Enterprise SSD applications? And what about products such as Kaminario’s recently announced 12TB DRAM-based SSD?

The volume of data being processed, transported, and stored is beginning to shift the traditional cost/performance ratio of semiconductor memory technologies. The lower power consumption of NAND nonvolatile memory is already beginning to outweigh other performance considerations at the current level of 1.2 zettabytes of new data per year.

Our expectation is that this growth of data will continue to put pressure on system-level designers to find more opportunities for memory technologies to increase the value of their contribution to the overall system-level cost/performance. We believe that the opportunity for memory technologies to break away from the traditional image of the ultimate high-volume commodity product is rapidly approaching as new cost/performance opportunities are identified, and that system-level designers will continue to encourage new levels of technology experimentation and product differentiation among various memory technologies.
