- In the Hadoop conf directory, edit core-site.xml and add
the following:
<property>
<name>fs.qfs.impl</name>
<value>org.apache.hadoop.fs.qfs.QuantcastFileSystem</value>
<description>The FileSystem for qfs: URIs.</description>
</property>
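
The property above is how Hadoop maps the qfs:// URI scheme to an
implementation class. As a rough illustration only (the class name comes
from the entry above; nothing else here is part of the required setup),
the lookup can be observed from a small Java program:

import org.apache.hadoop.conf.Configuration;

public class QfsImplLookup {
    public static void main(String[] args) {
        // Reads core-site.xml from the classpath, including the
        // fs.qfs.impl entry added above.
        Configuration conf = new Configuration();
        // Hadoop resolves qfs:// URIs by looking up fs.qfs.impl.
        Class<?> impl = conf.getClass("fs.qfs.impl", null);
        System.out.println("qfs:// URIs handled by: " + impl);
    }
}
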
- In the Hadoop conf directory, edit core-site.xml again and add
the following (with appropriate values for
<server> and <port>):
<property>
<name>fs.default.name</name>
<value>qfs://<server>:<port></value>
</property>
<property>
<name>fs.qfs.metaServerHost</name>
<value><server></value>
<description>The location of the QFS meta server.</description>
</property>
<property>
<name>fs.qfs.metaServerPort</name>
<value><port></value>
<description>The port on which the QFS meta server listens.</description>
</property>
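
For illustration, the same settings can be exercised from a small client
program. This is only a sketch: the host metaserver.example.com and port
20000 below are placeholders, and it assumes the QFS jar and native client
library are installed as described in the remaining steps.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class QfsConnectCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder values; substitute your own meta server host and port.
        conf.set("fs.default.name", "qfs://metaserver.example.com:20000");
        conf.set("fs.qfs.metaServerHost", "metaserver.example.com");
        conf.set("fs.qfs.metaServerPort", "20000");
        // FileSystem.get() instantiates the class named by fs.qfs.impl for
        // the qfs scheme and connects it to the configured meta server.
        FileSystem fs = FileSystem.get(
            URI.create("qfs://metaserver.example.com:20000"), conf);
        for (FileStatus s : fs.listStatus(new Path("/"))) {
            System.out.println(s.getPath());
        }
        fs.close();
    }
}
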
- Copy QFS's qfs-<version>.jar to Hadoop's lib directory. This step
enables Hadoop to load the QFS-specific modules. Note
that qfs-<version>.jar was built when you compiled the QFS source
code. This jar file contains code that calls QFS's client
library code via JNI; the native code is in QFS's
libqfs_client.so library.
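
A quick way to confirm that the copied jar is visible is to resolve the
file system class by name; this is only a sketch (the class name is the
one from fs.qfs.impl above), and it must be run with the same classpath
Hadoop uses.

public class QfsJarCheck {
    public static void main(String[] args) {
        try {
            // Succeeds only if qfs-<version>.jar is on the classpath.
            Class.forName("org.apache.hadoop.fs.qfs.QuantcastFileSystem");
            System.out.println("QFS jar found on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("QFS jar is missing from the classpath");
        }
    }
}
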
- When the Hadoop map/reduce trackers start up, those
processes (on local as well as remote nodes) will now need to load
QFS's libqfs_client.so library. To simplify this, it is advisable to
store libqfs_client.so in an NFS-accessible directory (similar to where
the Hadoop binaries/scripts are stored); then modify Hadoop's
conf/hadoop-env.sh, adding the following line with a suitable
value for <path>:
export LD_LIBRARY_PATH=<path>
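
As a rough check that LD_LIBRARY_PATH is set correctly, the native library
can be loaded by hand. On Linux, System.loadLibrary("qfs_client") maps to
libqfs_client.so and is resolved through java.library.path, which the JVM
derives from LD_LIBRARY_PATH; this sketch only verifies that the loader can
find the library.

public class QfsNativeLibCheck {
    public static void main(String[] args) {
        try {
            // Resolves libqfs_client.so via java.library.path / LD_LIBRARY_PATH.
            System.loadLibrary("qfs_client");
            System.out.println("libqfs_client.so loaded");
        } catch (UnsatisfiedLinkError e) {
            System.out.println("libqfs_client.so not found: " + e.getMessage());
        }
    }
}
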
- Start only the map/reduce trackers (for example, execute
Hadoop's bin/start-mapred.sh).
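
Once the trackers are up, a small client using the old
org.apache.hadoop.mapred API can confirm that the job tracker is
reachable. This is only a sketch and assumes mapred.job.tracker is set in
the same conf directory.

import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class TrackerCheck {
    public static void main(String[] args) throws Exception {
        // JobConf picks up mapred.job.tracker from the conf directory.
        JobClient client = new JobClient(new JobConf());
        ClusterStatus status = client.getClusterStatus();
        System.out.println("task trackers running: " + status.getTaskTrackers());
        client.close();
    }
}
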