Download Hadoop 2.6.2
Change your Eclipse theme
Creating a user and adding the user to a group
$ sudo apt-get install ssh
$ sudo apt-get install rsync
My .bashrc (change it to suit your needs)
#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export PATH=$PATH:/usr/lib/jvm/java-8-oracle/bin
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"
export HADOOP_CLASSPATH=$JAVA_HOME/lib/tools.jar
#HADOOP VARIABLES END
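After saving .bashrc, reload it and confirm the Hadoop directories actually ended up on your PATH. A quick sanity check (the exports are repeated here so the snippet stands alone; in practice you would just run `source ~/.bashrc` first):

```shell
# Same exports as in the .bashrc above, so this snippet is self-contained
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin

# Print one PATH entry per line and keep only the Hadoop ones;
# this should show /usr/local/hadoop/bin and /usr/local/hadoop/sbin
echo "$PATH" | tr ':' '\n' | grep hadoop
```

If nothing is printed, the exports were not picked up and `hadoop` will not be found on the command line.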
Now check that you can ssh to the localhost without a passphrase:
$ ssh localhost
If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
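If passphraseless ssh still fails after generating the key, a common culprit is file permissions: sshd ignores authorized_keys when ~/.ssh or the key file is group- or world-writable. A minimal fix (the mkdir/touch lines only make the snippet safe to run even if the files already exist):

```shell
# Make sure the directory and key file exist, then tighten their modes;
# sshd rejects authorized_keys that are group/world writable
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```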
Running Hadoop:
The following instructions are to run a MapReduce job locally. If you want to execute a job on YARN, see YARN on Single Node.
- Format the filesystem:
  $ bin/hdfs namenode -format
- Start NameNode daemon and DataNode daemon:
  $ sbin/start-dfs.sh
- Browse the web interface for the NameNode; by default it is available at:
- NameNode - http://localhost:50070/
- Make the HDFS directories required to execute MapReduce jobs:
  $ bin/hdfs dfs -mkdir /user
  $ bin/hdfs dfs -mkdir /user/<username>
- Copy the input files into the distributed filesystem:
  $ bin/hdfs dfs -put etc/hadoop input

Now set up and start YARN to submit a job. You can run a MapReduce job on YARN in pseudo-distributed mode by setting a few parameters and additionally running the ResourceManager and NodeManager daemons. The following instructions assume that steps 1-4 above have already been executed.
- Configure parameters as follows:
  etc/hadoop/mapred-site.xml:
  etc/hadoop/yarn-site.xml:
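The parameter values themselves got dropped above; the standard single-node settings for this step, as given in the Hadoop setup guide, are:

etc/hadoop/mapred-site.xml:

```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

etc/hadoop/yarn-site.xml:

```xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```

The first tells MapReduce to run on YARN instead of locally; the second enables the shuffle service the reducers need.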
- Start ResourceManager daemon and NodeManager daemon:
  $ sbin/start-yarn.sh
- Browse the web interface for the ResourceManager; by default it is available at:
- ResourceManager - http://localhost:8088/
- Run a MapReduce job.
- When you're done, stop the daemons with:
  $ sbin/stop-yarn.sh

Now submit a job:
- $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar grep input output 'dfs[a-z.]+'
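To get a feel for what this example job matches, you can try the same regular expression locally with grep. This is only a local illustration, not the MapReduce job itself; sample.txt is a made-up file:

```shell
# Build a small sample resembling lines from the Hadoop config files
printf 'dfs.replication\nyarn.resourcemanager.hostname\ndfs.namenode.name.dir\n' > sample.txt

# Print each match of the same pattern the example job counts;
# prints "dfs.replication" and "dfs.namenode.name.dir"
grep -Eo 'dfs[a-z.]+' sample.txt
```

The example job does the same thing over every file in the input directory and writes the match counts to the output directory.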
- Examine the output files. Copy the output files from the distributed filesystem to the local filesystem and examine them:
  $ bin/hdfs dfs -get output output
  $ cat output/*

  or view the output files on the distributed filesystem:
  $ bin/hdfs dfs -cat output/*
- When you're done, stop the daemons with:
  $ sbin/stop-dfs.sh

Try the jps command to see which daemons are running; it should show NameNode, DataNode, and SecondaryNameNode. If the NameNode is not running, you need to format the filesystem again (search for "format" in this blog). If the DataNode is not running, you may need to delete the files in the tmp directories, or it could be a permissions issue. To see the errors, try the "hadoop datanode" command.