Write amplification

Write amplification occurs on SSDs as a result of garbage collection and wear leveling, increasing the writes to the flash chips and shortening their life.[1]

Write amplification (WA) is an undesirable phenomenon associated with flash memory and solid-state drives (SSDs) in which the amount of data actually written to the flash chips is larger than the amount of data written by the host.

Because flash memory must be erased before it can be rewritten, the process to perform these operations results in moving (or rewriting) user data and metadata more than once. This multiplying effect increases the number of writes required over the life of the SSD, which shortens the time it can reliably operate. The increased writes also consume bandwidth to the flash memory, which mainly reduces random write performance of the SSD.[1][2] Many factors affect the write amplification of an SSD; some can be controlled by the user, and some are a direct result of the data written to the SSD and the way the SSD is used.

Intel[3] and SiliconSystems (acquired by Western Digital in 2009)[4] used the term write amplification in their papers and publications as early as 2008. Write amplification is typically measured by the ratio of the writes committed to the flash memory to the writes coming from the host system. Without compression, write amplification cannot drop below one. Using compression, SandForce has claimed to achieve a typical write amplification of 0.5,[5] with best-case values as low as 0.14 in the SF-2281 controller.[6]

Basic SSD operation

NAND flash memory writes data in 4 KiB pages and erases data in 256 KiB blocks.[7]

A fundamental characteristic of flash memory is that, unlike a hard disk drive, data stored in it cannot be directly overwritten. When data is written to an SSD, the target flash memory cells must all be in the erased state, and data is written to such pages (typically 4 KiB in size) one page at a time.

The SSD controller, which manages the flash memory and interfaces with the host system, uses a logical-to-physical mapping system known as logical block addressing (LBA), which is part of the flash translation layer (FTL).[8]

When new data comes in replacing older data already written, the SSD controller will write the new data in a new location and update the logical mapping to point to the new physical location. The data in the old location is no longer valid, and will need to be erased before the location can be written again.[1][9]
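
The remapping just described can be illustrated with a small sketch. The following is a minimal, hypothetical model of a flash translation layer written in Python (the class name, page counts, and bookkeeping are illustrative assumptions, not any real controller's implementation): it keeps a logical-to-physical map, directs every overwrite to a fresh physical page, and marks the old page as invalid so a later garbage-collection pass can reclaim it.

    # Minimal, hypothetical FTL sketch: logical writes are redirected to new
    # physical pages and the superseded pages are marked invalid (stale).
    class SimpleFTL:
        def __init__(self, num_pages):
            self.free_pages = list(range(num_pages))  # erased, writable pages
            self.mapping = {}           # logical block address -> physical page
            self.invalid_pages = set()  # stale pages awaiting garbage collection

        def write(self, lba, data):
            new_page = self.free_pages.pop(0)     # always program a fresh page
            old_page = self.mapping.get(lba)
            if old_page is not None:
                self.invalid_pages.add(old_page)  # the old copy becomes stale
            self.mapping[lba] = new_page          # 'data' would be programmed here

    ftl = SimpleFTL(num_pages=8)
    ftl.write(lba=0, data=b"v1")
    ftl.write(lba=0, data=b"v2")                  # overwrite: the page holding v1 is now stale
    print(ftl.mapping, ftl.invalid_pages)         # {0: 1} {0}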

Flash memory can only be programmed and erased a limited number of times. This is often referred to as the maximum number of program/erase cycles (P/E cycles) it can sustain over the life of the flash memory. Single-level cell (SLC) flash, designed for higher performance and longer endurance, can typically operate between 50,000 and 100,000 cycles. As of 2011, multi-level cell (MLC) flash is designed for lower cost applications and has a greatly reduced cycle count of typically between 3,000 and 5,000. A lower write amplification is more desirable, as it corresponds to a reduced number of P/E cycles on the flash memory and thereby to an increased SSD life.[1]

Calculating the value

Write amplification was present and known well before the term itself was defined, but it was in 2008 that both Intel[3][10] and SiliconSystems began using the term in their papers and publications.[4]

Every SSD has a write amplification value, which is expressed by the formula below.[1][11][12][13]

To accurately measure the write amplification of a particular SSD, test writes should be performed for enough time that the drive reaches a steady state.[2]

Write amplification formula
\text{WA} = \frac{\text{NAND}}{\text{HOST}}

where:
WA = write amplification value
NAND = amount of data written to the flash memory
HOST = amount of data written by the host
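
As a quick worked example of the formula, the sketch below simply divides the amount of data written to the flash by the amount of data written by the host; the byte counts are made-up illustrations, not measurements of any particular drive.

    def write_amplification(nand_bytes_written, host_bytes_written):
        """WA = data written to the flash memory / data written by the host."""
        return nand_bytes_written / host_bytes_written

    # Hypothetical counters: the host wrote 100 GiB, but garbage collection and
    # other overhead caused 130 GiB to be programmed into the flash.
    print(write_amplification(130 * 2**30, 100 * 2**30))  # 1.3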

Factors affecting the value

Many factors affect the write amplification of an SSD. The table below lists the primary factors and how they affect it. For factors that vary continuously, the table notes whether the relationship is direct or inverse; for example, as over-provisioning increases, write amplification decreases (an inverse relationship). For factors that are a simple toggle (enabled or disabled), the relationship is noted as positive or negative.[1][8][11]

Factors that affect write amplification
Factor | Description | Type | Relationship*
Garbage collection | Efficiency of the algorithm used to pick the next best block to erase and rewrite | Variable | Inverse (good)
Over-provisioning | Percentage of the physical capacity allocated to the SSD controller as spare area, outside the user space | Variable | Inverse (good)
TRIM | A SATA command that tells the SSD controller which data can be discarded during garbage collection | Toggle | Positive (good)
Free user space | Percentage of the user capacity that holds no actual user data; the TRIM command is required for this factor to be effective | Variable | Inverse (good)
Secure erase | Erases all user data and related metadata, resetting the SSD to its initial out-of-box performance (effective until garbage collection resumes) | Toggle | Positive (good)
Wear leveling | Efficiency of the algorithm that equalizes the number of writes across all blocks as evenly as possible | Variable | Direct (bad)
Separating static and dynamic data | Grouping data by how frequently it is modified | Toggle | Positive (good)
Sequential writes | In theory, sequential writes have a write amplification of 1, but other factors still affect the value | Toggle | Positive (good)
Random writes | Writing to non-sequential logical block addresses has the greatest impact on write amplification | Toggle | Negative (bad)
Data compression and reduction of data redundancy | Amount of redundant data removed before it is written to the flash memory | Variable | Inverse (good)
*Definition of the relationship types
Relationship | Description
Direct (bad) | As the factor increases, write amplification increases
Inverse (good) | As the factor increases, write amplification decreases
Positive (good) | When the factor is enabled, write amplification decreases
Negative (bad) | When the factor is enabled, write amplification increases

In addition to the factors above, the management of failure modes such as read disturb (en:Read_disturb)[14] can also affect write amplification (see Garbage collection below).

For the defragmentation of SSDs, see "Defragmentation on SSDs".

Garbage collection

Typical garbage collection (GC) operation on an SSD: pages are written into a block until it becomes full; the pages holding valid data are then copied together to a new block, and the old block is erased.[7]

Data is written to flash memory in units called pages, which are made up of multiple memory cells. However, the memory can only be erased in larger units called blocks, which are made up of multiple pages.[7] If the data in some of the pages of a block is no longer needed (these are called "stale" pages), only the pages in that block which still hold needed data are copied (written) to another, previously erased block.[2] Because the stale pages are not copied, the corresponding pages in the destination block remain free and can hold new data. This entire process is called garbage collection (GC).[1][12] All SSDs include some form of garbage collection, but they differ in when and how they perform it.[12] Garbage collection has a large effect on the write amplification of an SSD.[1][12]
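
To make the cost of this process concrete, the following sketch models a single hypothetical block (the 64-page block size and the page counts are illustrative, not the geometry of any specific device): reclaiming a block that still holds valid pages forces those pages to be rewritten, and that extra traffic is exactly what write amplification counts.

    # Hypothetical cost of reclaiming one 64-page block that still holds
    # 48 valid pages, in order to make room for 16 pages of new host data.
    PAGES_PER_BLOCK = 64
    valid_pages = 48                           # must be copied before the erase
    stale_pages = PAGES_PER_BLOCK - valid_pages
    host_pages_written = stale_pages           # new data fits in the freed pages
    flash_pages_written = host_pages_written + valid_pages
    print(flash_pages_written / host_pages_written)   # write amplification = 4.0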

Read operations do not require an erase of the flash memory, so they are not normally associated with write amplification. However, a block is rewritten before a failure mode such as read disturb (en:Read_disturb)[14] occurs; even so, this is considered to have little practical impact on the write amplification of the drive.[15]

Background garbage collection

The garbage collection process involves reading and rewriting data to the flash memory. This means that a new write from the host may first require reading an entire block, writing the parts of that block which still contain valid data, and then writing the new data. This can significantly reduce the performance of the system.[16] Some SSD controllers implement a feature called background garbage collection (BGC) or idle-time garbage collection (ITGC), in which the controller consolidates blocks of flash memory while the SSD is idle, before new write data arrives from the host. This keeps the performance of the device from degrading.[17]

If the controller were to background garbage collect all of the spare blocks before it was absolutely necessary, new data written from the host could be written without having to move any data in advance, letting the performance operate at its peak speed. The trade-off is that some of those blocks of data are actually not needed by the host and will eventually be deleted, but the OS did not tell the controller this information. The result is that the soon-to-be-deleted data is rewritten to another location in the flash memory increasing the write amplification. In some of the SSDs from OCZ the background garbage collection only clears up a small number of blocks then stops, thereby limiting the amount of excessive writes.[12] Another solution is to have an efficient garbage collection system which can perform the necessary moves in parallel with the host writes. This solution is more effective in high write environments where the SSD is rarely idle.[18] The SandForce SSD controllers[16] and the systems from Violin Memory have this capability.[11]

Filesystem-aware garbage collection

In 2010, some manufacturers (notably Samsung) introduced SSD controllers that extended the concept of BGC to analyze the file system used on the SSD, to identify recently deleted files and unpartitioned space. The manufacturer claimed that this would ensure that even systems (operating systems and SATA controller hardware) which do not support TRIM could achieve similar performance. The operation of the Samsung implementation appeared to assume and require an NTFS file system.[19] It is not clear if this feature is still available in currently shipping SSDs from these manufacturers. Systematic data corruption has been reported on these drives if they are not formatted properly using MBR and NTFS.[20]

Over-provisioning

The three levels of over-provisioning found on SSDs[16][21]

Over-provisioning (sometimes spelled as OP, over provisioning, or overprovisioning) is the difference between the physical capacity of the flash memory and the logical capacity presented through the operating system (OS) as available for the user. During the garbage collection, wear-leveling, and bad block mapping operations on the SSD, the additional space from over-provisioning helps lower the write amplification when the controller writes to the flash memory.[3][21][22][23]

The first level of over-provisioning comes from the computation of the capacity and the use of the unit gigabyte (GB) where in fact it should be written as gibibyte (GiB). Both HDD and SSD vendors use the term GB to represent a decimal GB, or 1,000,000,000 (10^9) bytes. Flash memory (like most other electronic storage) is assembled in powers of two, so calculating the physical capacity of an SSD would be based on 1,073,741,824 (2^30) bytes per binary GB. The difference between these two values is 7.37% (= (2^30 − 10^9) / 10^9). Therefore, a 128 GB SSD with 0% over-provisioning would provide 128,000,000,000 bytes to the user. This initial 7.37% is typically not counted in the total over-provisioning number.[21][23]

The second level of over-provisioning comes from the manufacturer. This level of over-provisioning is typically 0%, 7%, or 28% based on the difference between the decimal GB of the physical capacity and the decimal GB of the available space to the user. As an example, a manufacturer might publish a specification for their SSD at 100 GB, 120 GB or 128 GB based on 128 GB of possible capacity. This difference is 28%, 7% and 0% respectively and is the basis for the manufacturer claiming they have 28% of over-provisioning on their drive. This does not count the additional 7.37% of capacity available from the difference between the decimal and binary GB.[21][23]

The third level of over-provisioning comes from end users who give up capacity to gain endurance and performance. Some SSDs provide a utility that permits the end user to select additional over-provisioning. Furthermore, if any SSD is set up with an OS partition smaller than 100% of the available space, that unpartitioned space will be automatically used by the SSD as over-provisioning as well.[23] Over-provisioning does take away from user capacity, but it gives back reduced write amplification, increased endurance, and increased performance.[18][22][24][25][26]

Over-provisioning calculation
\text{over-provisioning} = \frac{\text{physical capacity} - \text{user capacity}}{\text{user capacity}}
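
A short sketch of this calculation, reusing the capacity figures quoted above (the 7.37% binary/decimal gap and the 0%, 7%, and 28% manufacturer levels); the numbers are the illustrative ones from this section, not the specification of any particular product.

    def over_provisioning(physical_capacity, user_capacity):
        """(physical capacity - user capacity) / user capacity"""
        return (physical_capacity - user_capacity) / user_capacity

    GIB = 2**30    # binary gigabyte (gibibyte)
    GB = 10**9     # decimal gigabyte

    # First level: the binary/decimal gap, roughly 7.37%, usually not counted.
    print(over_provisioning(GIB, GB))                       # ~0.0737

    # Second level: 128 GB of raw capacity sold as 128, 120, or 100 GB,
    # compared in decimal GB as described above.
    for user_gb in (128, 120, 100):
        print(user_gb, "GB ->", round(over_provisioning(128 * GB, user_gb * GB), 2))
    # 0.0, 0.07, 0.28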

The TRIM command

TRIM is a SATA command that enables the operating system to tell an SSD which blocks of previously saved data are no longer needed as a result of file deletions or use of the format command. When an LBA is replaced by the OS, as with an overwrite of a file, the SSD knows that the original LBA can be marked as stale or invalid and it will not preserve those blocks during garbage collection. If the user or operating system erases a file (not just removes parts of it), the file will typically be marked for deletion, but the actual contents on the disk are never erased. Because of this, the SSD does not know that the LBAs the file previously occupied can be erased, so the SSD will keep garbage collecting them.[27][28][29]

The introduction of the TRIM command resolves this problem for operating systems that support it, such as Windows 7,[28] Mac OS (latest releases of Snow Leopard, Lion, and Mountain Lion, patched in some cases),[30] and Linux since kernel 2.6.33.[31] When a file is permanently deleted or the drive is formatted, the OS sends the TRIM command along with the LBAs that no longer contain valid data. This informs the SSD that those LBAs can be erased and reused, which reduces the number of LBAs needing to be moved during garbage collection. The result is that the SSD will have more free space, enabling lower write amplification and higher performance.[27][28][29]
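
Conceptually, TRIM can be pictured as marking the physical pages behind the trimmed LBAs as stale so that garbage collection no longer has to copy them. The sketch below is only an illustration of that bookkeeping (a toy mapping, not the actual SATA command protocol or any controller's data structures).

    # Toy illustration of what TRIM tells the controller: the pages backing
    # the trimmed LBAs become stale and will not be relocated during GC.
    mapping = {0: 5, 1: 6, 2: 7}     # logical block address -> physical page
    invalid_pages = set()

    def trim(lbas):
        for lba in lbas:
            page = mapping.pop(lba, None)   # forget the logical mapping
            if page is not None:
                invalid_pages.add(page)     # page is now reclaimable for free

    trim([0, 1])                     # the OS reports LBAs 0 and 1 as deleted
    print(mapping, invalid_pages)    # {2: 7} {5, 6}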

Limitations of TRIM

The TRIM command also needs the support of the SSD. If the firmware in the SSD does not have support for the TRIM command, the LBAs received with the TRIM command will not be marked as invalid and the drive will continue to garbage collect the data assuming it is still valid. Only when the OS saves new data into those LBAs will the SSD know to mark the original LBAs as invalid.[29] SSD manufacturers that did not originally build TRIM support into their drives can either offer a firmware upgrade to the user, or provide a separate utility that extracts the information on the invalid data from the OS and separately TRIMs the SSD. The benefit would only be realized after each run of that utility by the user. The user could set up that utility to run periodically in the background as an automatically scheduled task.[16]

Just because an SSD supports the TRIM command does not necessarily mean it will be able to perform at top speed immediately after. The space which is freed up after the TRIM command may be random locations spread throughout the SSD. It will take a number of passes of writing data and garbage collecting before those spaces are consolidated to show improved performance.[29]

Even after the OS and SSD are configured to support the TRIM command, other conditions will prevent any benefit from TRIM. As of early 2010, databases and RAID systems are not yet TRIM-aware and consequently will not know how to pass that information on to the SSD. In those cases the SSD will continue to save and garbage collect those blocks until the OS uses those LBAs for new writes.[29]

The actual benefit of the TRIM command depends upon the free user space on the SSD. If the user capacity on the SSD was 100 GB and the user actually saved 95 GB of data to the drive, any TRIM operation would not add more than 5 GB of free space for garbage collection and wear leveling. In those situations, increasing the amount of over-provisioning by 5 GB would allow the SSD to have more consistent performance because it would always have the additional 5 GB of additional free space without having to wait for the TRIM command to come from the OS.[29]

Free user space

The SSD controller will use any free blocks on the SSD for garbage collection and wear leveling. The portion of the user capacity which is free from user data (either already TRIMed or never written in the first place) will look the same as over-provisioning space (until the user saves new data to the SSD). If the user only saves data consuming 1/2 of the total user capacity of the drive, the other half of the user capacity will look like additional over-provisioning (as long as the TRIM command is supported in the system).[29][32]

Secure erase

The ATA Secure Erase command is designed to remove all user data from a drive. With an SSD without integrated encryption, this command will put the drive back to its original out-of-box state. This will initially restore its performance to the highest possible level and the best (lowest number) possible write amplification, but as soon as the drive starts garbage collecting again the performance and write amplification will start returning to the former levels.[33][34] Many tools use the ATA Secure Erase command to reset the drive and provide a user interface as well. One free tool that is commonly referenced in the industry is called HDDErase.[34][35] Parted Magic provides a free bootable Linux system of disk utilities including secure erase.[36]

Drives which encrypt all writes on the fly can implement ATA Secure Erase in another way. They simply zeroize and generate a new random encryption key each time a secure erase is done. In this way the old data cannot be read anymore, as it cannot be decrypted.[37] Some drives with integrated encryption may require a TRIM command be sent to the drive to put the drive back to its original out-of-box state.[38]

Wear leveling

If a particular block were programmed and erased repeatedly without writing to any other blocks, the one block would wear out before all the other blocks, thereby prematurely ending the life of the SSD. For this reason, SSD controllers use a technique called wear leveling to distribute writes as evenly as possible across all the flash blocks in the SSD. In a perfect scenario, this would enable every block to be written to its maximum life so they all fail at the same time. Unfortunately, the process to evenly distribute writes requires data previously written and not changing (cold data) to be moved, so that data which are changing more frequently (hot data) can be written into those blocks. Each time data are relocated without being changed by the host system, this increases the write amplification and thus reduces the life of the flash memory. The key is to find an optimum algorithm which maximizes them both.[39]
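
One common way to picture the trade-off is a controller that picks its next write target by erase count. The toy policy below is only an illustration of why that forces relocation of cold data (an assumed policy and made-up counters, not a description of any shipping firmware).

    # Toy wear-leveling policy: always program the block with the lowest
    # erase count; if that block currently holds cold data, the data must
    # be moved first, which adds to write amplification.
    blocks = [
        {"id": 0, "erase_count": 120, "holds": "hot data"},
        {"id": 1, "erase_count": 30,  "holds": "cold data"},
        {"id": 2, "erase_count": 90,  "holds": None},
    ]

    target = min(blocks, key=lambda b: b["erase_count"])
    if target["holds"] == "cold data":
        print("relocate cold data out of block", target["id"], "before reusing it")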

Separating static and dynamic data

The separation of static and dynamic data to reduce write amplification is not a simple process for the SSD controller. The process requires the SSD controller to separate the LBAs with data which is constantly changing and requiring rewriting (dynamic data) from the LBAs with data which rarely changes and does not require any rewrites (static data). If the data is mixed in the same blocks, as with almost all systems today, any rewrites will require the SSD controller to garbage collect both the dynamic data (which caused the rewrite initially) and static data (which did not require any rewrite). Any garbage collection of data that would not have otherwise required moving will increase write amplification. Therefore separating the data will enable static data to stay at rest and if it never gets rewritten it will have the lowest possible write amplification for that data. The drawback to this process is that somehow the SSD controller must still find a way to wear level the static data because those blocks that never change will not get a chance to be written to their maximum P/E cycles.[1]
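
A simple way to sketch this separation is to route logical addresses to "hot" or "cold" blocks based on how often they have been rewritten. The threshold and bookkeeping below are purely illustrative assumptions, not an algorithm used by any named controller.

    # Illustrative hot/cold routing: frequently rewritten LBAs go to "hot"
    # blocks and rarely rewritten LBAs to "cold" blocks, so garbage collecting
    # hot blocks does not drag static data along with it.
    from collections import defaultdict

    write_counts = defaultdict(int)
    HOT_THRESHOLD = 4                    # assumed cutoff, purely illustrative

    def choose_pool(lba):
        write_counts[lba] += 1
        return "hot" if write_counts[lba] >= HOT_THRESHOLD else "cold"

    for lba in [7, 7, 7, 7, 42]:
        print(lba, "->", choose_pool(lba))   # LBA 7 ends up hot, LBA 42 stays cold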

Sequential writes

When an SSD is writing data sequentially, the write amplification is equal to one meaning there is no write amplification. The reason is as the data is written, the entire block is filled sequentially with data related to the same file. If the OS determines that file is to be replaced or deleted, the entire block can be marked as invalid, and there is no need to read parts of it to garbage collect and rewrite into another block. It will only need to be erased, which is much easier and faster than the read-erase-modify-write process needed for randomly written data going through garbage collection.[8]

Random writes

The peak random write performance on an SSD is driven by plenty of free blocks after the SSD is completely garbage collected, secure erased, 100% TRIMed, or newly installed. The maximum speed will depend upon the number of parallel flash channels connected to the SSD controller, the efficiency of the firmware, and the speed of the flash memory in writing to a page. During this phase the write amplification will be the best it can ever be for random writes and will be approaching one. Once the blocks are all written once, garbage collection will begin and the performance will be gated by the speed and efficiency of that process. Write amplification in this phase will increase to the highest levels the drive will experience.[8]

Impact on performance

The overall performance of an SSD is dependent upon a number of factors, including write amplification. Writing to a flash memory device takes longer than reading from it.[17] An SSD generally uses multiple flash memory components connected in parallel to increase performance. If the SSD has a high write amplification, the controller will be required to write that many more times to the flash memory. This requires even more time to write the data from the host. An SSD with a low write amplification will not need to write as much data and can therefore be finished writing sooner than a drive with a high write amplification.[1][9]
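
The relationship can be approximated by dividing the raw flash write bandwidth by the write amplification; the bandwidth figure below is an arbitrary placeholder used only to show the shape of the effect, not a benchmark of any drive.

    def effective_host_write_speed(flash_write_mb_s, write_amplification):
        """Rough sustained host write speed once garbage collection is active."""
        return flash_write_mb_s / write_amplification

    # Placeholder: 400 MB/s of raw flash write bandwidth.
    for wa in (1.0, 1.5, 3.0):
        print(wa, "->", round(effective_host_write_speed(400, wa)), "MB/s")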

Product statements

In September 2008, Intel announced the X25-M SATA SSD with a reported WA as low as 1.1.[5][40] In April 2009, SandForce announced the SF-1000 SSD Processor family with a reported WA of 0.5 which appears to come from some form of data compression.[5][41] Before this announcement, a write amplification of 1.0 was considered the lowest that could be attained with an SSD.[17] Currently, only SandForce employs compression in its SSD controller.

References

  1. ^ a b c d e f g h i j Hu, X.-Y.; Eleftheriou, E.; Haas, R.; Iliadis, I.; Pletka, R. (2009). "Write Amplification Analysis in Flash-Based Solid State Drives". IBM. CiteSeerX: 10.1.1.154.8668. Retrieved June 2, 2010.
  2. ^ a b c Smith, Kent (August 17, 2009). "Benchmarking SSDs: The Devil is in the Preconditioning Details". SandForce. Retrieved August 28, 2012.
  3. ^ a b c Lucchesi, Ray (September 2008). "SSD Flash drives enter the enterprise". Silverton Consulting. Retrieved June 18, 2010.
  4. ^ a b Kerekes, Zsolt. "Western Digital Solid State Storage - formerly SiliconSystems". ACSL. Retrieved June 19, 2010.
  5. ^ a b c Shimpi, Anand Lal (December 31, 2009). "OCZ's Vertex 2 Pro Preview: The Fastest MLC SSD We've Ever Tested". AnandTech. Retrieved June 16, 2011.
  6. ^ Ku, Andrew (February 6, 2012). "Intel SSD 520 Review: SandForce's Technology: Very Low Write Amplification". Tom's Hardware. Retrieved February 10, 2012.
  7. ^ a b c Thatcher, Jonathan (August 18, 2009). "NAND Flash Solid State Storage Performance and Capability – an In-depth Look". SNIA. Retrieved August 28, 2012.
  8. ^ a b c d Hu, X.-Y.; Haas, R. (March 31, 2010). "The Fundamental Limit of Flash Random Write Performance: Understanding, Analysis and Performance Modelling". IBM Research, Zurich. Retrieved June 19, 2010.
  9. ^ a b Agrawal, N.; Prabhakaran, V.; Wobber, T.; Davis, J. D.; Manasse, M.; Panigrahy, R. (June 2008). "Design Tradeoffs for SSD Performance". Microsoft. CiteSeerX: 10.1.1.141.1709. Retrieved June 2, 2010.
  10. ^ Case, Loyd (September 8, 2008). "Intel X25 80GB Solid-State Drive Review". Retrieved July 28, 2011.
  11. ^ a b c Kerekes, Zsolt. "Flash SSD Jargon Explained". ACSL. Retrieved May 31, 2010.
  12. ^ a b c d e "SSDs - Write Amplification, TRIM and GC". OCZ Technology. Retrieved November 13, 2012.
  13. ^ "Intel Solid State Drives". Intel. Retrieved May 31, 2010.
  14. ^ a b http://pc.watch.impress.co.jp/docs/news/event/20110421_441051.html
  15. ^ "TN-29-17: NAND Flash Design and Use Considerations". Micron (2006). Retrieved June 2, 2010.
  16. ^ a b c d Mehling, Herman (December 1, 2009). "Solid State Drives Take Out the Garbage". Enterprise Storage Forum. Retrieved June 18, 2010.
  17. ^ a b c Conley, Kevin (May 27, 2010). "Corsair Force Series SSDs: Putting a Damper on Write Amplification". Corsair.com. Retrieved June 18, 2010.
  18. ^ a b Layton, Jeffrey B. (October 27, 2009). "Anatomy of SSDs". Linux Magazine. Retrieved June 19, 2010.
  19. ^ Bell, Graeme B. (2010). "Solid State Drives: The Beginning of the End for Current Practice in Digital Forensic Recovery?". Journal of Digital Forensics, Security and Law. Retrieved April 2, 2012.
  20. ^ "SSDs are incompatible with GPT partitioning?!". Retrieval date unknown.
  21. ^ a b c d Bagley, Jim (July 1, 2009). "Over-provisioning: a winning strategy or a retreat?". StorageStrategies Now. p. 2. Retrieved June 19, 2010.
  22. ^ a b Drossel, Gary (September 14, 2009). "Methodologies for Calculating SSD Useable Life". Storage Developer Conference, 2009. Retrieved June 20, 2010.
  23. ^ a b c d Smith, Kent (August 1, 2011). "Understanding SSD Over-provisioning". flashmemorysummit.com. p. 14. Retrieved December 3, 2012.
  24. ^ Shimpi, Anand Lal (May 3, 2010). "The Impact of Spare Area on SandForce, More Capacity At No Performance Loss?". AnandTech.com. p. 2. Retrieved June 19, 2010.
  25. ^ OBrien, Kevin (February 6, 2012). "Intel SSD 520 Enterprise Review". Storage Review. Retrieved November 29, 2012. "20% over-provisioning adds substantial performance in all profiles with write activity"
  26. ^ "White Paper: Over-Provisioning an Intel SSD". Intel (2010). Archived from the original in 2011. Retrieved November 29, 2012.
  27. ^ a b Christiansen, Neal (September 14, 2009). "ATA Trim/Delete Notification Support in Windows 7". Storage Developer Conference, 2009. Retrieved June 20, 2010.
  28. ^ a b c Shimpi, Anand Lal (November 17, 2009). "The SSD Improv: Intel & Indilinx get TRIM, Kingston Brings Intel Down to $115". AnandTech.com. Retrieved June 20, 2010.
  29. ^ a b c d e f g Mehling, Herman (January 27, 2010). "Solid State Drives Get Faster with TRIM". Enterprise Storage Forum. Retrieved June 20, 2010.
  30. ^ "Enable TRIM for All SSD's [sic] in Mac OS X Lion". osxdaily.com (January 3, 2012). Retrieved August 14, 2012.
  31. ^ "Linux 2 6 33 Features". kernelnewbies.org (February 4, 2010). Retrieved July 23, 2010.
  32. ^ Shimpi, Anand Lal (March 18, 2009). "The SSD Anthology: Understanding SSDs and New Drives from OCZ". AnandTech.com. p. 9. Retrieved June 20, 2010.
  33. ^ Shimpi, Anand Lal (March 18, 2009). "The SSD Anthology: Understanding SSDs and New Drives from OCZ". AnandTech.com. p. 11. Retrieved June 20, 2010.
  34. ^ a b Malventano, Allyn (February 13, 2009). "Long-term performance analysis of Intel Mainstream SSDs". PC Perspective. Retrieved June 20, 2010.
  35. ^ "CMRR - Secure Erase". CMRR. Retrieved June 21, 2010.
  36. ^ "How to Secure Erase Your OCZ SSD Using a Bootable Linux CD". OCZ Technology (September 7, 2011). Retrieved February 10, 2012.
  37. ^ "The Intel SSD 320 Review: 25nm G3 is Finally Here". AnandTech. Retrieved June 29, 2011.
  38. ^ "SSD Secure Erase - Ziele eines Secure Erase". Thomas-Krenn.AG. Retrieved September 28, 2011.
  39. ^ Chang, Li-Pin (March 11, 2007). "On Efficient Wear Leveling for Large Scale Flash Memory Storage Systems". National Chiao Tung University, HsinChu, Taiwan. CiteSeerX: 10.1.1.103.4903. Retrieved May 31, 2010.
  40. ^ "Intel Introduces Solid-State Drives for Notebook and Desktop Computers". Intel (September 8, 2008). Retrieved May 31, 2010.
  41. ^ "SandForce SSD Processors Transform Mainstream Data Storage". SandForce (September 8, 2008). Retrieved May 31, 2010.

External links