Saturday 1 October 2016

Is flash chip capacity really limited to powers of 2?


I have never seen a flash chip whose capacity is not a strict power of 2 (unlike hard drives, where capacities are arbitrary). I wonder what prevents manufacturers from creating such chips: is it an engineering reason, marketing, or something else?


This thought came to me after examining some flash drives: the NAND flash inside is, for example, 8 GiB, while the drive itself presents a capacity of 8 GB. It looks like the difference is used to compensate for errors, which appear quite often in MLC NAND, by remapping bad blocks to spare ones.


For example, plugging an SD card labeled as "512 MB" into my Linux box produces the following message:


sd 12:0:0:0: [sdb] 996352 512-byte logical blocks: (510 MB/486 MiB)


This is not formatting or partitioning overhead: the raw size of the device itself is well below 512 MiB.
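
To make the gap concrete, here is a quick check of the arithmetic in Python (the block count and block size are taken straight from the kernel message above):

    # Size reported by the kernel for the "512 MB" SD card.
    blocks = 996352          # 512-byte logical blocks from the dmesg line
    block_size = 512         # bytes per logical block

    total_bytes = blocks * block_size
    print(total_bytes)                  # 510132224
    print(total_bytes / 10**6)          # ~510.1 MB (decimal megabytes)
    print(total_bytes / 2**20)          # ~486.5 MiB (binary mebibytes)

    # A true power-of-2 "512 MiB" device would hold:
    print(512 * 2**20)                  # 536870912 bytes, noticeably more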


Also, I'm aware of the out-of-band (OOB) area present in all (MLC) NAND flashes. I don't consider it "extra" capacity, because:



  1. It is almost always used to store ECC codes rather than actual data; hence, the amount of usable information is the same with or without the OOB.

  2. The number of pages and blocks in the entire flash is still a power of 2, and moreover, the number of bytes in the data and OOB areas of a page is, on its own, a power of 2 as well (see the sketch after this list). While the latter can be justified by minimizing addressing overhead, the former puzzles me. We already have RAS and CAS, with one address space larger than the other, and the matrix is already asymmetrical; why make each dimension exactly a power of 2?
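
For reference, a minimal Python sketch of a common large-block NAND layout; the specific numbers (2048 data bytes plus 64 OOB bytes per page, 64 pages per block, 4096 blocks) are typical illustrative values, not taken from any particular chip:

    # Typical large-block NAND geometry (illustrative assumption, not from the question).
    data_bytes_per_page = 2048   # main data area
    oob_bytes_per_page  = 64     # out-of-band area, normally holding ECC
    pages_per_block     = 64
    blocks_per_device   = 4096   # e.g. a 512 MiB (data) device

    for name, value in [("data bytes/page", data_bytes_per_page),
                        ("OOB bytes/page", oob_bytes_per_page),
                        ("pages/block", pages_per_block),
                        ("blocks/device", blocks_per_device)]:
        is_pow2 = value & (value - 1) == 0
        print(f"{name}: {value} (power of 2: {is_pow2})")

    # User-visible capacity counts only the data area; the OOB is not "extra" space.
    print(data_bytes_per_page * pages_per_block * blocks_per_device // 2**20, "MiB of data")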


So, what prevents a vendor from adding a few more rows or columns? I'm mostly interested in NAND flash, since it has a unified 8-bit address/data bus, and the number of address bits does not really matter (unlike for NOR flash).



Answer



The amount of silicon required to address less than 2^n but more than 2^(n-1) pages/rows/columns/cells/etc. is the same as the amount of silicon needed to address exactly 2^n. So the silicon usage efficiency is best at exactly 2^n.
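
One way to see this: a row decoder with n address bits can select 2^n distinct rows, so any row count that is not a power of 2 leaves part of the decoder unused. A small Python sketch (the row counts are arbitrary examples):

    from math import ceil, log2

    def decoder_utilization(rows: int) -> float:
        """Fraction of an n-bit decoder's 2**n outputs actually used for `rows` rows."""
        n = ceil(log2(rows))          # address bits needed to reach every row
        return rows / 2**n

    for rows in (4096, 5000, 6144, 8000, 8192):
        n = ceil(log2(rows))
        print(f"{rows:5d} rows -> {n} address bits, "
              f"{decoder_utilization(rows):.0%} of the decoder used")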


Further, if you intend to put several of them together in a parallel access scheme, you will end up with gaps in the address space if each chip doesn't cover exactly 2^n addresses.
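
A sketch of the gap problem, assuming two hypothetical chips that are each decoded into their own 2^n-sized window of the address space:

    # Two chips, each given a 4096-address window (12 address bits decoded per chip select).
    WINDOW = 4096

    def address_map(chip_sizes):
        """Return (start, end, fills_window) for chips placed in consecutive 2**n windows."""
        regions = []
        for i, size in enumerate(chip_sizes):
            base = i * WINDOW
            regions.append((base, base + size - 1, size == WINDOW))
        return regions

    # Power-of-2 chips: the combined space is contiguous.
    print(address_map([4096, 4096]))
    # A 3000-address chip leaves a hole from 3000 to 4095 before the next chip starts.
    print(address_map([3000, 4096]))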



There's little advantage to supporting sizes between 2^(n-1) and 2^n.


If you need to build a device with memory in the range between 2^(n-1) and 2^n, you will generally find that buying the 2^n part is more cost-effective than buying the 2^(n-1) part plus a smaller part to make up the difference. Manufacturers face the same issue when producing silicon dies: yes, they could make an odd-sized one, but it wouldn't improve their bottom line.

