
            An open-source project from the startup vertical search engine Kosmix (http://www.kosmix.com/): another open-source distributed file system in the mold of Google's GFS, suitable for data-intensive network applications such as image storage, search engines, grid computing, and data mining. It also integrates well with Hadoop, so existing Hadoop functionality can be reused. Implemented in C++.

            Introduction

            Applications that process large volumes of data (such as search engines, grid-computing applications, and data-mining applications) require a backend infrastructure for storing data. The infrastructure must support applications whose workloads can be characterized as:

            • Primarily write-once/read-many workloads
            • A few million large files, each on the order of a few tens of MB to a few tens of GB in size
            • Mostly sequential access

            We have developed the Kosmos Distributed File System (KFS), a high performance distributed file system to meet this infrastructure need.

            The system consists of 3 components:

            1. Meta-data server: a single meta-data server that provides a global namespace
            2. Chunk server: Files are split into blocks, or chunks, and stored on block servers, also known as chunkservers. Chunkservers store the chunks as files in the underlying file system (such as XFS on Linux)
            3. Client library: provides the file system API that allows applications to interface with KFS. To use KFS, applications must be modified and relinked with the KFS client library.

            KFS is implemented in C++. It is built using standard system components such as TCP sockets, aio (for disk I/O), the STL, and the Boost libraries. It has been tested on 64-bit x86 architectures running Linux FC5.

            While KFS can be accessed natively from C++ applications, support is also provided for Java applications. JNI glue code is included in the release to allow Java applications to access the KFS client library APIs.

            Features
            • Incremental scalability: New chunkserver nodes can be added as storage needs increase; the system automatically adapts to the new nodes.
            • Availability: Replication is used to provide availability in the face of chunkserver failures. Typically, files are replicated 3-way.
            • Per file degree of replication: The degree of replication is configurable on a per file basis, with a max. limit of 64.
            • Re-replication: Whenever the degree of replication for a file drops below the configured amount (for example, due to an extended chunkserver outage), the metaserver forces the affected chunks to be re-replicated on the remaining chunkservers. Re-replication is done in the background without overwhelming the system.
            • Re-balancing: Periodically, the meta-server may rebalance the chunks amongst chunkservers. This is done to help with balancing disk space utilization amongst nodes.
            • Data integrity: To handle disk corruptions to data blocks, data blocks are checksummed. Checksum verification is done on each read; whenever there is a checksum mismatch, re-replication is used to recover the corrupted chunk.
            • File writes: The system follows the standard model. When an application creates a file, the filename becomes part of the filesystem namespace. For performance, writes are cached at the KFS client library. Periodically, the cache is flushed and data is pushed out to the chunkservers. Also, applications can force data to be flushed to the chunkservers. In either case, once data is flushed to the server, it is available for reading.
            • Leases: KFS client library uses caching to improve performance. Leases are used to support cache consistency.
            • Chunk versioning: Versioning is used to detect stale chunks.
            • Client side fail-over: The client library is resilient to chunksever failures. During reads, if the client library determines that the chunkserver it is communicating with is unreachable, the client library will fail-over to another chunkserver and continue the read. This fail-over is transparent to the application.
            • Language support: KFS client library can be accessed from C++, Java, and Python.
            • FUSE support on Linux: By mounting KFS via FUSE, existing Linux utilities (such as ls) can interface with KFS.
            • Tools: A shell binary is included in the set of tools. This allows users to navigate the filesystem tree using utilities such as cp, ls, mkdir, rmdir, rm, and mv. Tools to monitor the chunkservers and metaserver are also provided.
            • Deploy scripts: To simplify launching KFS servers, a set of scripts is provided to (1) install KFS binaries on a set of nodes and (2) start/stop KFS servers on a set of nodes.
            • Job placement support: The KFS client library exports an API to determine the location of a byte range of a file. Job placement systems built on top of KFS can leverage this API to schedule jobs appropriately.
            • Local read optimization: When applications are run on the same nodes as chunkservers, the KFS client library contains an optimization for reading data locally. That is, if the chunk is stored on the same node as the one on which the application is executing, data is read from the local node.
            KFS with Hadoop

            KFS has been integrated with Hadoop using Hadoop’s filesystem interfaces. This allows existing Hadoop applications to use KFS seamlessly. The integration code has been submitted as a patch to Hadoop-JIRA-1963 (this will enable distribution of the integration code with Hadoop). In addition, the code as well as instructions will also be available for download from the KFS project page shortly. As part of the integration, there is job placement support for Hadoop. That is, the Hadoop Map/Reduce job placement system can schedule jobs on the nodes where the chunks are stored.
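With the integration patch applied, a Hadoop site configuration would point a kfs:// filesystem scheme at the metaserver, roughly along these lines. The property names and the `KosmosFileSystem` class follow the Hadoop-side KFS bindings of that era, but treat them as illustrative; host and port are placeholders:

```xml
<property>
  <name>fs.kfs.impl</name>
  <value>org.apache.hadoop.fs.kfs.KosmosFileSystem</value>
</property>
<property>
  <name>fs.kfs.metaServerHost</name>
  <value>meta.example.com</value>
</property>
<property>
  <name>fs.kfs.metaServerPort</name>
  <value>20000</value>
</property>
```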

            References:

            • distributed file systems

            http://lucene.apache.org/hadoop/

            http://www.danga.com/mogilefs/

            http://www.lustre.org/

            http://oss.sgi.com/projects/xfs/


            http://www.megite.com/discover/filesystem

            http://swik.net/distributed+cluster

            • cluster & high availability

            http://www.gluster.org/index.php

            http://www.linux-ha.org/

            http://openssi.org

            http://kerrighed.org/

            http://openmosix.sourceforge.net/


            http://www.linux.com/article.pl?sid=06/09/12/1459204

            http://labs.google.com/papers/mapreduce.html

            posted on 2010-04-01 09:47 by 小王 · reads (2010) · comments (2) · category: Distributed Systems

            Comments:
            # re: kosmix, another open-source GFS-like distributed file system 2010-04-01 12:55 | 那誰
            Conceptual error: MapReduce is not a distributed file system; what you mean is GFS.

            # re: kosmix, another open-source GFS-like distributed file system 2010-04-01 21:51 | 小王
            Thanks to 那誰 for the correction; the title has been fixed.