A Few Good Papers from FAST '13

Posted by jametong on 2013-02-19 17:39 | Hits: 3116
Tag: My Reading | deduplication | Fast 2013 | flash | flash cache

This post collects summaries from the blogs of Jeff Darcy and Robin Harris.


SD Codes: Erasure Codes Designed for How Storage Systems Really Fail by James S. Plank, U of Tennessee, and Mario Blaum and James L. Hafner of IBM Research. RAID systems are vulnerable to disk failures and unrecoverable read errors, but RAID 6 is overkill for UREs. The paper investigates lighter-weight erasure codes – a disk plus a sector, instead of 2 disks – to reclaim capacity for user data.
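The intuition behind such codes starts from simple XOR parity, where one parity block can recover any single lost block. The sketch below illustrates only that baseline principle (roughly what RAID 5 does), not the SD construction from the paper; all names are illustrative:

```python
# Minimal XOR-parity sketch: one parity block recovers any single lost
# data block. SD codes build on this kind of idea, devoting a sector's
# worth of extra parity (rather than a whole second disk) to UREs.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three disks
parity = xor_blocks(data)            # parity block on a fourth disk

# Disk 1 fails: recover its block from the survivors plus parity,
# since A ^ C ^ (A ^ B ^ C) == B.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

Recovering from a lost disk *and* a URE on another disk is exactly what this single parity cannot do, which is where the paper's heavier sector-level codes come in.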

The StorageMojo take: high update costs make this most attractive for active archives, not primary storage. The capacity savings could extend the economic life of current RAID strategies vs newer erasure codes.

Gecko: Contention-Oblivious Disk Arrays for Cloud Storage by Ji-Yong Shin and Hakim Weatherspoon of Cornell, Mahesh Balakrishnan of Microsoft Research and Tudor Marian of Google. The limited I/O performance of disks makes contention a persistent problem on shared systems. The authors propose a novel log-structured disk/SSD configuration and show that it virtually eliminates contention between writes, reads and garbage collection.
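Gecko's core idea is to chain drives into one log so that all writes land on the tail drive, leaving the other drives free to serve reads and garbage collection without contending with the write stream. A hypothetical sketch of that separation (the class and method names are ours, not the authors' implementation):

```python
# Sketch of a "chained log": writes always go to the tail drive, so
# reads against earlier drives never contend with the write stream.
class ChainedLog:
    def __init__(self, num_drives, drive_capacity):
        self.drives = [[] for _ in range(num_drives)]  # each drive holds blocks
        self.capacity = drive_capacity
        self.tail = 0                                  # index of the write drive

    def append(self, block):
        if len(self.drives[self.tail]) >= self.capacity:
            self.tail = (self.tail + 1) % len(self.drives)  # advance the tail
            self.drives[self.tail].clear()  # space reclaimed by GC beforehand
        self.drives[self.tail].append(block)
        return self.tail, len(self.drives[self.tail]) - 1   # physical address

    def read(self, drive, offset):
        # Reads on non-tail drives proceed without blocking writes.
        return self.drives[drive][offset]

log = ChainedLog(num_drives=3, drive_capacity=2)
addr = log.append(b"block-0")
log.append(b"block-1")
log.append(b"block-2")            # drive 0 is full; tail advances to drive 1
assert log.read(*addr) == b"block-0"
```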

The StorageMojo take: SSDs help with contention, but they aren’t affordable for large-scale deployments. Gecko offers a way to leverage SSDs for log-structured block storage that significantly improves performance at a reasonable hardware cost.

Write Policies for Host-side Flash Caches by Leonardo Marmol, Raju Rangaswami and Ming Zhao of Florida International U., Swaminathan Sundararaman and Nisha Talagala of Fusion-io and Ricardo Koller of FIU and VMware. Write-through caching is safe but expensive. NAND’s non-volatile nature enables novel write-back cache strategies that preserve data integrity while improving performance. Thanks to large DRAM caches, read-only flash caches aren’t the performance booster they would have been even 5 years ago.
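The basic trade-off between write-through and write-back caching can be sketched in a few lines. This is only the baseline contrast, with hypothetical names; the paper's actual contribution is the consistency machinery (e.g. ordered and journaled write-back) layered on top of deferred writes:

```python
# Minimal contrast of write-through vs. write-back for a host-side cache.
class Cache:
    def __init__(self, backing, write_back=False):
        self.backing = backing     # dict standing in for the backing store
        self.data = {}             # flash cache contents
        self.dirty = set()         # blocks not yet on the backing store
        self.write_back = write_back

    def write(self, key, value):
        self.data[key] = value
        if self.write_back:
            self.dirty.add(key)          # defer: ack from flash only
        else:
            self.backing[key] = value    # write-through: pay store latency now

    def flush(self):
        for key in self.dirty:
            self.backing[key] = self.data[key]
        self.dirty.clear()

store = {}
cache = Cache(store, write_back=True)
cache.write("a", 1)
assert "a" not in store   # write acknowledged from flash only
cache.flush()
assert store["a"] == 1    # now durable on the backing store
```

The danger, of course, is losing the dirty set on a crash; exploiting NAND's non-volatility to close that window safely is exactly what the paper explores.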

The StorageMojo take: Using flash only for reads means ignoring half – or more – of the I/O problem. This needs to be fixed, and this paper points the way.

Understanding the Robustness of SSDs under Power Fault by Mai Zheng and Feng Qin of Ohio State and Joseph Tucek and Mark Lillibridge of HP Labs. The authors tested 15 SSDs from 5 vendors by injecting power faults. 13 of the 15 lost data that should have been written and 2 of the 13 suffered massive corruption.

The StorageMojo take: We may be trusting SSDs more than they deserve. This research points out problems with still-immature SSD technology.

A Study of Linux File System Evolution by Lanyue Lu, Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau and Shan Lu of the University of Wisconsin. The authors analyzed 8 years of Linux file system patches – 5079 of them – and discovered, for instance, that

. . . semantic bugs, which require an understanding of file-system semantics to find or fix, are the dominant bug category (over 50% of all bugs). These types of bugs are vexing, as most of them are hard to detect via generic bug detection tools; more complex model checking or formal specification may be needed.

The StorageMojo take: Anyone building or maintaining a file system should read this paper to get a handle on how and why file systems fail. Tool builders will find some likely projects as well.


From my own perspective as a filesystem developer, here are some of my own favorites.

  • A Study of Linux File System Evolution (Lu et al, Best Paper [full length])
    Maybe not as much fun as some of the algorithmic stuff in other papers, but I’m very excited by the idea of making an empirical, quantitative study of how filesystems evolve. I’m sure there will be many followups to this.
  • Unioning of the Buffer Cache and Journaling Layers with Non-volatile Memory (Lee et al, Best Paper [short]).
    Devices that combine memory-like performance and byte addressability with persistence are almost here, and figuring out how to use them most effectively is going to be very important over the next few years. This paper’s observations about how to avoid double buffering between an NVM-based cache and an on-disk journal are worth looking into.
  • Radio+Tuner: A Tunable Distributed Object Store (Perkins et al, poster and WiP)
    It might be about object stores, but the core idea – dynamically selecting algorithms within a storage system based on simulation of expected results for a given workload – could be applied to filesystems as well.
  • Gecko: Contention-Oblivious Disk Arrays for Cloud Storage (Shin et al).
    This is my own personal favorite, and why I’m glad I stayed for the very last session. They present a very simple but powerful way to avoid the “segment cleaning” problem of log-structured filesystems by using multiple disks. Then, as if that’s not enough, they use SSDs in a very intelligent way to boost read performance even further without affecting writes.




1. A Study of Linux File System Evolution


2. A Few Papers on Cache Handling

Write Policies for Host-side Flash Caches

Warming Up Storage-Level Caches with Bonfire

Unioning of the Buffer Cache and Journaling Layers with Non-volatile Memory



SD Codes: Erasure Codes Designed for How Storage Systems Really Fail

With today's massive growth in data volumes, and especially the growth of large amounts of garbage data, finding effective ways to ensure data reliability while saving storage space is a hot topic. It would be quite interesting to dig further into the algorithms and implementations behind erasure codes.


HARDFS: Hardening HDFS with Selective and Lightweight Versioning



Concurrent Deletion in a Distributed Content-Addressable Storage System with Global Deduplication

File Recipe Compression in Data Deduplication Systems

Improving Restore Speed for Backup Systems that Use Inline Chunk-Based Deduplication



Extending the Lifetime of Flash-based Storage through Reducing Write Amplification from File Systems

Understanding the Robustness of SSDs under Power Fault



Original link: http://www.dbthink.com/archives/791
