HADOOP INSTALL SERIES: 7. Starting Hadoop/Hadoop components

The start-dfs.sh command, as the name suggests, starts the components necessary for HDFS: the NameNode to manage the filesystem and a single DataNode to hold data. The SecondaryNameNode is an availability aid that we'll discuss in a later chapter.

After starting these components, we use the JDK's jps utility to see which Java processes are running and, as the output looks good, we then use Hadoop's dfs utility to list the root of the HDFS filesystem.

After this, we use start-mapred.sh to start the MapReduce components, this time the JobTracker and a single TaskTracker, and then use jps again to verify the result.

There is also a combined start-all.sh script that we'll use at a later stage, but in the early days it's useful to do a two-stage startup to verify the cluster configuration more easily.
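In practice, the two-stage sequence looks like the following session. This is a minimal sketch, assuming a Hadoop 1.x-style installation with the bin directory on the PATH; the process IDs in the jps output are purely illustrative:

    $ start-dfs.sh          # start the HDFS daemons: NameNode, DataNode, SecondaryNameNode
    $ jps                   # list running Java processes; your PIDs will differ
    12321 NameNode
    12456 DataNode
    12587 SecondaryNameNode
    12702 Jps
    $ hadoop dfs -ls /      # list the root of the HDFS filesystem
    $ start-mapred.sh       # start the MapReduce daemons: JobTracker, TaskTracker
    $ jps                   # verify that all five Hadoop daemons are now present
    12321 NameNode
    12456 DataNode
    12587 SecondaryNameNode
    12845 JobTracker
    12966 TaskTracker
    13087 Jps

If any of these processes is missing from the jps output, the corresponding daemon log under HADOOP_HOME/logs (the default location) is the first place to look.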


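For completeness, here is the combined script mentioned above; in Hadoop 1.x it simply invokes the two stage scripts in order, so the resulting set of daemons is the same:

    $ start-all.sh          # runs start-dfs.sh and then start-mapred.sh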