I'm currently combing the electrical engineering literature for the strategies used to reliably produce highly complex but extremely fragile systems such as DRAM, where you have an array of many millions of components and a single failure can brick the whole device.
A common strategy seems to be manufacturing a larger array than needed, then selectively disabling damaged rows/columns using settable fuses. I've read[1] that (as of 2008) no DRAM module comes off the line fully functioning, and that for 1GB DDR3 modules, with all of the repair technologies in place, the overall yield goes from ~0% to around 70%.
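To make the repair scheme concrete, here is a toy sketch of how fuse-based row remapping works: at test time a "fuse" is blown to redirect a failed row address to a spare row, and the address decoder consults the fuse map on every access. All names and sizes here are illustrative, not from any real DRAM design.

```python
# Hypothetical sketch of fuse-based row redundancy. At wafer test, a bad
# row address is permanently remapped to a spare row; afterwards, reads
# and writes to that address transparently land in the spare.

class RedundantArray:
    def __init__(self, num_rows, num_spares, row_width=8):
        self.rows = [[0] * row_width for _ in range(num_rows)]     # main array
        self.spares = [[0] * row_width for _ in range(num_spares)] # spare rows
        self.fuse_map = {}                    # failed row address -> spare index
        self.free_spares = list(range(num_spares))

    def blow_fuse(self, bad_row):
        """Permanently redirect bad_row to a spare (done once, at test time)."""
        if not self.free_spares:
            raise RuntimeError("array unrepairable: out of spares")
        self.fuse_map[bad_row] = self.free_spares.pop(0)

    def _resolve(self, row):
        # The address decoder checks the fuse map before the main array.
        if row in self.fuse_map:
            return self.spares[self.fuse_map[row]]
        return self.rows[row]

    def write(self, row, col, value):
        self._resolve(row)[col] = value

    def read(self, row, col):
        return self._resolve(row)[col]

array = RedundantArray(num_rows=1024, num_spares=4)
array.blow_fuse(bad_row=17)   # row 17 failed wafer test
array.write(17, 0, 1)         # transparently stored in a spare row
assert array.read(17, 0) == 1
```

In real silicon the remap is done with laser- or electrically-blown fuses feeding comparators in the row decoder, so the redirect costs essentially nothing per access; the dictionary lookup above just stands in for that hardware.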
That's just one data point, however. What I'm wondering is: is this something that gets advertised in the field? Is there a decent source discussing the yield improvement these techniques provide relative to the state of the art? I have sources like [2] that do a decent job of discussing yield from first-principles reasoning, but that's from 1991, and I imagine/hope that things are better now.
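The first-principles reasoning in question can be sketched with a toy Poisson yield model: without repair, a die is good only if it has zero defects; with R repairable faults, a die is good if it has at most R defects (optimistically assuming each defect is fixable by one spare row/column). The defect count and spare count below are made-up numbers chosen to reproduce the ~0% vs. ~70% gap, not measured data.

```python
import math

# Toy Poisson yield model, in the spirit of first-principles analyses
# like Horiguchi et al. [2]. All numbers are illustrative assumptions.

def yield_no_repair(defects_per_die):
    # P(zero defects) under a Poisson defect distribution.
    return math.exp(-defects_per_die)

def yield_with_repair(defects_per_die, max_repairable):
    # P(defect count <= max_repairable), optimistically assuming every
    # defect is repairable by exactly one spare element.
    lam = defects_per_die
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(max_repairable + 1))

lam = 20.0  # assumed average defects per die -- hypothetical
print(f"no repair:   {yield_no_repair(lam):.2e}")       # vanishingly small
print(f"with repair: {yield_with_repair(lam, 22):.0%}")  # the bulk of dies
```

Even this crude model shows the qualitative effect: when the expected defect count is far above zero, raw yield collapses exponentially, while a modest number of spares recovers most of the distribution's mass.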
Additionally, is the use of redundant rows/columns still employed today? How much additional die area does this redundancy require?
I've also been looking at other parallel systems, like TFT displays. A colleague mentioned that Samsung, at one point, found it cheaper to manufacture broken displays and repair them than to improve their process to an acceptable yield. I haven't yet found a decent source for this, however.
Refs
[1]: Gutmann, Ronald J., et al. Wafer Level 3-D ICs Process Technology. New York: Springer, 2008.
[2]: Horiguchi, Masashi, et al. "A flexible redundancy technique for high-density DRAMs." IEEE Journal of Solid-State Circuits 26.1 (1991): 12-17.