USB flash memory drives are experiencing an increase in product failures as a result of quality-control problems, and the wildly popular replacements for floppy disks could be facing other problems related to fragmentation, according to industry experts. Recent Gartner Inc. numbers indicate that 88.2 million USB flash drives were shipped in 2005, and 115.7 million will be shipped in 2006.

While these portable nonvolatile storage units don't last forever, single-level cell NAND flash drives are commonly acknowledged to last for an average of 100,000 read-write cycles, effectively unlimited for most users. However, according to Alan Niebel, a semiconductor analyst at Web-Feet Research Inc. in Monterey, Calif., fragmentation is becoming more of a threat, especially as USB flash memory sizes grow. "Flash disks will soon encounter fragmentation problems and a need to arrange the data in order to prevent problems," Niebel said. "Like mechanical disks, flash disks have their own technical limitations, so it will be wise to measure the fragmentation level on flash disks in order to avoid unnecessary writes on the media," he added.

Koby Biller, founder of the Israeli software firm Disklace Ltd., also believes USB flash drives need to be measured for fragmentation and then defragged before the damage to memory reaches a point of no return. A former systems engineer with IBM, Biller has 27 years of experience working on a variety of IT systems. "It's like cholesterol, people don't measure it until their life spans start to be shortened," Biller said.

According to Framingham, Mass.-based IDC, fragmentation occurs when documents are created and then saved or erased. When a file is first created and saved onto a hard drive or disk, it is stored in contiguous clusters. When the file is later recalled, the head, which reads the information, moves from one cluster to another on a single track. As files are added, they are also set in contiguous clusters.
When files are erased, the cluster space they occupied becomes available and is filled as new files are created. When the new files are larger than the available contiguous space, the information in those files gets broken up and is randomly placed on the disk, and files start to become fragmented. Eventually, the situation deteriorates to the point where performance is severely impacted and files take disproportionately long times to open.
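The cluster mechanics IDC describes can be sketched with a toy allocator. This is an illustrative model only, not any real filesystem's allocation policy: a disk is an array of clusters, files fill free clusters first-fit, and a new file larger than the largest free hole gets split into non-contiguous runs.

```python
# Toy model of file fragmentation on a cluster-based disk.
# All names and sizes here are illustrative, not a real filesystem.

def allocate(disk, name, size):
    """First-fit allocator: place `size` clusters in whatever free slots
    come first, splitting the file across holes when necessary."""
    placed = 0
    for i, slot in enumerate(disk):
        if slot is None:
            disk[i] = name
            placed += 1
            if placed == size:
                return True
    return False  # disk full

def delete(disk, name):
    """Erase a file: its clusters become free holes."""
    for i, slot in enumerate(disk):
        if slot == name:
            disk[i] = None

def fragments(disk, name):
    """Count the contiguous runs a file occupies (1 == unfragmented)."""
    runs, prev = 0, None
    for slot in disk:
        if slot == name and prev != name:
            runs += 1
        prev = slot
    return runs

disk = [None] * 10
allocate(disk, "A", 4)   # clusters 0-3, contiguous
allocate(disk, "B", 2)   # clusters 4-5, contiguous
allocate(disk, "C", 3)   # clusters 6-8, contiguous
delete(disk, "B")        # frees a 2-cluster hole at 4-5
allocate(disk, "D", 3)   # needs 3 clusters: fills the hole, spills to 9
print(fragments(disk, "D"))  # -> 2: D is split into two runs
```

Deleting B leaves a hole smaller than the next file, so D lands in two separate runs, which is exactly the mechanism the article attributes to IDC; on a mechanical disk the head then seeks between those runs on every read.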