path: root/fs/ceph/crush/crush.h
Commit message | Author | Age | Files | Lines
* ceph: factor out libceph from Ceph file system | Yehuda Sadeh | 2010-10-21 | 1 | -180/+0
  This factors out protocol and low-level storage parts of ceph into a
  separate libceph module living in net/ceph and include/linux/ceph. This is
  mostly a matter of moving files around. However, a few key pieces of the
  interface change as well:

  - ceph_client becomes ceph_fs_client and ceph_client, where the latter
    captures the mon and osd clients, and the fs_client gets the mds client
    and file system specific pieces.
  - Mount option parsing and debugfs setup is correspondingly broken into
    two pieces.
  - The mon client gets a generic handler callback for otherwise unknown
    messages (mds map, in this case).
  - The basic supported/required feature bits can be expanded (and are by
    ceph_fs_client).

  No functional change, aside from some subtle error handling cases that got
  cleaned up in the refactoring process.

  Signed-off-by: Sage Weil <sage@newdream.net>
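  A simplified sketch of the resulting split (field names abbreviated and
  illustrative; the authoritative definitions live in the moved headers under
  include/linux/ceph/ and in fs/ceph/super.h):

	/* Stub types so the sketch stands alone; libceph defines the real ones. */
	struct ceph_mon_client { int stub; };
	struct ceph_osd_client { int stub; };
	struct ceph_mds_client;

	/* libceph (net/ceph): protocol and low-level storage state. */
	struct ceph_client {
		struct ceph_mon_client monc;    /* monitor client */
		struct ceph_osd_client osdc;    /* object storage (OSD) client */
		/* ... messenger, generic options, debugfs entries ... */
	};

	/* Ceph fs (fs/ceph): file system state wrapping a ceph_client. */
	struct ceph_fs_client {
		struct ceph_client *client;     /* shared libceph core */
		struct ceph_mds_client *mdsc;   /* metadata (MDS) client */
		/* ... superblock, fs mount options, fs debugfs entries ... */
	};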
* ceph: clean up header guards | Sage Weil | 2010-08-02 | 1 | -2/+2
  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: make CRUSH hash function a bucket property | Sage Weil | 2009-11-08 | 1 | -1/+2
  Make the integer hash function a property of the bucket it is used on. This
  allows us to gracefully add support for new hash functions without starting
  from scratch.

  Signed-off-by: Sage Weil <sage@newdream.net>
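  A minimal sketch of the idea, using illustrative names rather than the
  exact crush.h definitions: the bucket records which integer hash algorithm
  it uses, and callers dispatch on that id instead of hard-coding one
  function, so a new algorithm only adds an enum value and a switch case.

	#include <stdint.h>
	#include <stdio.h>

	/* Illustrative algorithm ids; the real code defines its own constants. */
	enum hash_type {
		HASH_RJENKINS1 = 0,     /* the only algorithm at this point */
	};

	/* Cut-down bucket: the hash algorithm is now a per-bucket property. */
	struct bucket {
		int      id;
		uint8_t  hash;          /* which hash_type this bucket uses */
		uint32_t size;          /* number of items in the bucket */
	};

	/* Toy stand-in for a Jenkins-style integer mix; not the real function. */
	static uint32_t hash_rjenkins1(uint32_t a, uint32_t b, uint32_t c)
	{
		a ^= b * 0x9e3779b9u;
		a ^= c + (a << 6) + (a >> 2);
		return a;
	}

	/* Dispatch on the bucket's hash id. */
	static uint32_t bucket_hash(const struct bucket *b,
				    uint32_t x, uint32_t r, uint32_t item)
	{
		switch (b->hash) {
		case HASH_RJENKINS1:
			return hash_rjenkins1(x, r, item);
		default:
			return 0;       /* unknown algorithm */
		}
	}

	int main(void)
	{
		struct bucket b = { .id = -1, .hash = HASH_RJENKINS1, .size = 4 };

		/* Pick a pseudo-random item index for input x = 42, replica 0. */
		printf("item index: %u\n", bucket_hash(&b, 42, 0, 7) % b.size);
		return 0;
	}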
* ceph: make CRUSH hash functions non-inline | Sage Weil | 2009-11-07 | 1 | -10/+1
  These are way too big to be inline. I missed crush/* when doing the inline
  audit for akpm's review.

  Signed-off-by: Sage Weil <sage@newdream.net>
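  An illustration of the pattern (hypothetical names, not the actual crush.h
  functions): a large function that was defined "static inline" in the header
  is reduced to a declaration there, and its body moves to a .c file so it is
  compiled exactly once.

	#include <stdint.h>

	/* Before, in the header (sketch):
	 *   static inline uint32_t example_hash32(uint32_t a) { ...large body... }
	 *
	 * After, the header keeps only the declaration: */
	uint32_t example_hash32(uint32_t a);

	/* ...and the definition lives in a .c file (sketch): */
	uint32_t example_hash32(uint32_t a)
	{
		/* large mixing body elided */
		return a * 0x9e3779b9u;
	}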
* ceph: CRUSH mapping algorithm | Sage Weil | 2009-10-06 | 1 | -0/+188
  CRUSH is a pseudorandom data distribution function designed to map inputs
  onto a dynamic hierarchy of devices, while minimizing the extent to which
  inputs are remapped when devices are added or removed. It includes some
  features that are specifically useful for storage, most notably the ability
  to map each input onto a set of N devices that are separated across
  administrator-defined failure domains. CRUSH is used to distribute data
  across the cluster of Ceph storage nodes.

  More information about CRUSH can be found in this paper:
  http://www.ssrc.ucsc.edu/Papers/weil-sc06.pdf

  Signed-off-by: Sage Weil <sage@newdream.net>
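  Not the real CRUSH algorithm (which walks a weighted hierarchy with several
  bucket types), but a toy sketch of the mapping idea under those stated
  simplifications: for each of N replicas, deterministically pick a failure
  domain and then a device within it by hashing (input, replica, candidate)
  and keeping the highest score. Because a candidate's score depends only on
  that triple, adding or removing a device only remaps the inputs whose
  winning candidate changes, which is the minimal-remapping behaviour the
  commit message describes.

	#include <stdint.h>
	#include <stdio.h>

	#define NUM_DOMAINS     3   /* e.g. racks (failure domains) */
	#define DEVS_PER_DOMAIN 4   /* devices per rack */
	#define REPLICAS        2   /* devices per input; must be <= NUM_DOMAINS */

	/* Toy integer mix; the real CRUSH uses a Jenkins-style hash. */
	static uint32_t mix(uint32_t a, uint32_t b, uint32_t c)
	{
		uint32_t h = a * 0x9e3779b1u;
		h ^= b + (h << 6) + (h >> 2);
		h ^= c * 0x85ebca6bu;
		return h ^ (h >> 16);
	}

	/* Highest-score ("straw"-like) choice of one candidate out of n. */
	static int choose(uint32_t x, uint32_t r, uint32_t salt, int n)
	{
		int best = 0;
		uint32_t best_score = 0;

		for (int i = 0; i < n; i++) {
			uint32_t score = mix(x, r ^ salt, (uint32_t)i);
			if (score >= best_score) {
				best_score = score;
				best = i;
			}
		}
		return best;
	}

	/* Map input x onto REPLICAS (domain, device) pairs, one per domain. */
	static void map_input(uint32_t x, int out[REPLICAS][2])
	{
		int used[NUM_DOMAINS] = { 0 };

		for (uint32_t r = 0; r < REPLICAS; r++) {
			int d = 0;

			/* Re-hash a few times to find an unused failure domain,
			 * then fall back to the next free one (one always exists
			 * because REPLICAS <= NUM_DOMAINS). */
			for (uint32_t retry = 0; retry < 16; retry++) {
				d = choose(x, r + retry * 0x9e37u, 0xd0d0u,
					   NUM_DOMAINS);
				if (!used[d])
					break;
			}
			while (used[d])
				d = (d + 1) % NUM_DOMAINS;
			used[d] = 1;

			out[r][0] = d;                          /* domain */
			out[r][1] = choose(x, r, 0xbeefu + (uint32_t)d,
					   DEVS_PER_DOMAIN);    /* device */
		}
	}

	int main(void)
	{
		for (uint32_t x = 0; x < 4; x++) {
			int out[REPLICAS][2];

			map_input(x, out);
			printf("input %u ->", x);
			for (int r = 0; r < REPLICAS; r++)
				printf(" (rack %d, dev %d)", out[r][0], out[r][1]);
			printf("\n");
		}
		return 0;
	}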