Researchers at Berkeley have suggested that, rather than bolting some amount of SRAM, organized as a cache, onto a processor, one should add a processor core to a DRAM. They called this approach IRAM, for Intelligent RAM. Since the processor and memory coexist on the same chip, extremely high-bandwidth connections between them are possible and cheap. If the on-chip memory is large enough to hold a complete application and its data, one can then imagine adding serial or narrow interfaces such as HyperTransport, RapidIO, or Gigabit Ethernet, and thus avoid expensive, silicon-unfriendly wide parallel external interfaces. In such a situation, an IRAM chip would need interfaces only for power and network interconnect. A specialized version of the concept, a vector-processing IRAM (Berkeley's VIRAM), has also been proposed. We summarize the pros and cons of IRAM:
IRAM pros and cons
Advantages
• High bandwidth (compared to traditional DRAM; see the rough estimate after this list)
• Low latency (compared to traditional DRAM)
• Energy efficiency
• Compactness
Disadvantages
• Questionable suitability of the DRAM manufacturing process for implementing processor (logic) circuits
• High-temperature operation of the chip (because of the high-performance logic), making DRAM refresh cycles very frequent
• Limitation of a system's memory size to what fits on a single chip
• No flexibility in the processing power/memory capacity ratio
• Cost of manufacturing test
• Industry acceptance
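To make the first two advantages concrete, the following back-of-envelope sketch compares a wide on-chip connection with a conventional external memory interface. All of the bus widths and clock rates used here are illustrative assumptions chosen for the early-2000s context of the text, not figures from the Berkeley IRAM project.

```python
# Rough, illustrative comparison of on-chip IRAM bandwidth versus an
# external DRAM interface. All figures are assumptions for illustration,
# not measurements from the Berkeley IRAM project.

def bandwidth_gbytes_per_s(bus_width_bits: int, clock_mhz: float) -> float:
    """Peak bandwidth of a simple synchronous bus, in GB/s."""
    return bus_width_bits / 8 * clock_mhz * 1e6 / 1e9

# On-chip connection: very wide buses are cheap when processor and DRAM
# share the same die (assume 1024 bits at 200 MHz).
on_chip = bandwidth_gbytes_per_s(bus_width_bits=1024, clock_mhz=200)

# External interface: pin count limits the width
# (assume 64 bits at 200 MHz).
external = bandwidth_gbytes_per_s(bus_width_bits=64, clock_mhz=200)

print(f"on-chip : {on_chip:.1f} GB/s")   # ~25.6 GB/s
print(f"external: {external:.1f} GB/s")  # ~1.6 GB/s
print(f"ratio   : {on_chip / external:.0f}x")
```

Under these assumptions the on-chip path delivers roughly an order of magnitude more bandwidth, simply because die wiring is not constrained by package pin count.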
These judgements arose from the IRAM project, and deserve some comment.
• DRAM processes are optimized for different capabilities than the high-performance logic processes generally adopted for microprocessors. The natural question is therefore: should an IRAM be built in a DRAM process (yielding a markedly sub-optimal processor) or in a logic process (yielding a markedly sub-optimal DRAM)?
• One may circumvent the memory capacity limitation of pure IRAM by providing interfaces to external memory (perhaps serial, for cost reasons), but then some of the IRAM advantage is lost.
• With the pure IRAM approach, each chip has a well-defined processing and memory capacity; changing this ratio means changing the chip (see the sketch following this list). The most natural system architecture for such a device is a loosely coupled multiprocessor. Note that integrating a message-passing interface (for example, RapidIO) would not change the processing/capacity ratio. Supporting an SMP organization would require major, complicated changes to an IRAM chip and would strongly affect performance, since inter-chip bandwidth is unlikely to match within-chip bandwidth.
• A significant portion of the cost of a DRAM is manufacturing test. Adding a processor is bound to increase this cost noticeably, but the cost of an IRAM must be compared with the combined cost of a separate processor and memory.
• For an IRAM-like approach to succeed, it must be widely adopted. For this to occur, a major industrial gulf would likely have to be bridged between memory manufacturers (whose economics have long been governed by commodity-like cost constraints) and microprocessor manufacturers (whose pricing reflects a belief in value added through superior IP).
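As a small illustration of the fixed processing/capacity ratio noted above, the sketch below uses hypothetical per-chip figures (the capacity and GFLOPS values are assumptions, not IRAM project data). It shows that sizing a pure-IRAM system for memory capacity also fixes, and may over-provision, its compute.

```python
import math

# Hypothetical per-chip figures for a pure IRAM node (illustrative only).
MEM_PER_CHIP_MB = 128      # on-chip DRAM capacity
GFLOPS_PER_CHIP = 1.6      # on-chip processor performance

def size_system(required_mem_mb: float):
    """Return (chips, total memory MB, total GFLOPS) for a memory target.

    Because processing and memory come bundled on each chip, buying more
    memory necessarily buys more compute in the same fixed ratio.
    """
    chips = math.ceil(required_mem_mb / MEM_PER_CHIP_MB)
    return chips, chips * MEM_PER_CHIP_MB, chips * GFLOPS_PER_CHIP

for need_mb in (256, 1024, 8192):
    chips, mem, gflops = size_system(need_mb)
    print(f"{need_mb:>5} MB needed -> {chips:>3} chips, "
          f"{mem} MB, {gflops:.1f} GFLOPS")
```

Scaling the memory target by 32x also scales the compute by 32x, whether or not the application needs it; changing the ratio requires a different chip.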
We should also note, reflecting the concerns of an earlier section, that an IRAM would likely offer a new instruction set architecture and would thus run into many other barriers as well; it is possible that a Java-like approach might soften this harsh judgement. An IRAM must also choose whether or not to follow DRAM interface standards; either course of action has cost and usability consequences.
And finally, we must note once again that any silicon technology which does not achieve wide adoption is destined to disappear.
Source of information: Server Architectures, Elsevier, 2005.