Installing HADOOP on Red Hat Linux / Oracle Linux


Step 1: Verify If Java Is Installed

Before installing Hadoop, make sure Java is available. Java is the primary requirement for running Hadoop on any system, so check that you have Java installed using the following command.

[root@storage jdk1.8.0_111]# java -version

openjdk version "1.8.0_101"

OpenJDK Runtime Environment (build 1.8.0_101-b13)

OpenJDK 64-Bit Server VM (build 25.101-b13, mixed mode)

[root@storage jdk1.8.0_111]#


If Java is not installed, install it first and then repeat the check above.
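A minimal sketch using the distribution's OpenJDK packages follows (the package names are the standard RHEL/Oracle Linux ones; if you prefer the Oracle JDK, unpack it under /opt or /usr/local, which is where this guide's later paths point):

[root@storage ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel

[root@storage ~]# java -version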

Step 2: Creating Hadoop User

Create a dedicated user for Hadoop as root using the following commands.

[root@storage opt]#  groupadd hadoop

[root@storage opt]# useradd -g hadoop hduser

[root@storage ~]# passwd hduser

Changing password for user hduser.

New password:

BAD PASSWORD: it is based on a dictionary word

BAD PASSWORD: is too simple

Retype new password:

passwd: all authentication tokens updated successfully.

Step 3: SET UP KEY BASED SSH

After creating the account, you also need to set up key-based SSH to its own account. To do this, execute the following commands.

[root@storage ~]# su - hduser

[hduser@storage ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hduser/.ssh/id_rsa):

Created directory '/home/hduser/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hduser/.ssh/id_rsa.

Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.

The key fingerprint is:

52:ab:8d:fa:b0:69:93:19:2a:b5:46:cd:f4:8a:07:b9 hduser@storage.castrading.com

The key's randomart image is:

+--[ RSA 2048]----+
|                 |
|                 |
|        .        |
|    .  . .       |
|   = .. S        |
|  = + .=         |
| o *.=o .        |
|. E B=           |
| o o+o.          |
+-----------------+

[hduser@storage ~]$

[hduser@storage ~]$  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hduser@storage ~]$  chmod 0600 ~/.ssh/authorized_keys
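On systems that ship ssh-copy-id, the same append-and-chmod can be done in one step (assuming sshd on localhost accepts password logins):

[hduser@storage ~]$ ssh-copy-id hduser@localhost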


Step 4: VERIFY KEY BASED SSH LOGIN

[hduser@storage ~]$ ssh localhost

The authenticity of host 'localhost (::1)' can't be established.

RSA key fingerprint is 51:75:58:3a:65:e4:ac:ea:c3:bb:ba:94:00:7f:cc:f8.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'localhost' (RSA) to the list of known hosts.

[hduser@storage ~]$ exit

logout

Connection to localhost closed.

[hduser@storage ~]$

Step 5: Download HADOOP

Now download the Hadoop 2.6.5 binary archive using the command below.

 

[hduser@storage ~]$  pwd

/home/hduser

[hduser@storage ~]$ cd ~

[hduser@storage ~]$ wget http://www-eu.apache.org/dist/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz

Resolving www-eu.apache.org... 88.198.26.2, 2a01:4f8:130:2192::2

Connecting to www-eu.apache.org|88.198.26.2|:80... connected.

HTTP request sent, awaiting response... 200 OK

Length: 214092195 (204M) [application/x-gzip]

Saving to: "hadoop-2.6.5.tar.gz"

100%[===============================================================>] 214,092,195 45.0K/s   in 17m 15s

2017-01-06 15:20:18 (202 KB/s) - "hadoop-2.6.5.tar.gz" saved [214092195/214092195]

[hduser@storage ~]$

[hduser@storage ~]$  ls

hadoop-2.6.5.tar.gz
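Optionally, verify the archive before extracting it. Apache publishes digest files alongside each release; the .mds file name below is an assumption about where it sits on the archive mirror, so adjust the URL if needed:

[hduser@storage ~]$ wget http://archive.apache.org/dist/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz.mds

[hduser@storage ~]$ md5sum hadoop-2.6.5.tar.gz

Compare the md5sum output against the MD5 line in the .mds file.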

[hduser@storage ~]$  tar xzf hadoop-2.6.5.tar.gz

[hduser@storage ~]$ pwd

/home/hduser

[hduser@storage ~]$ ls

hadoop-2.6.5  hadoop-2.6.5.tar.gz


Step 6: Configure Hadoop Pseudo-Distributed Mode (Set Up Environment Variables)

First we need to set the environment variables used by Hadoop. Edit the ~/.bashrc file and append the following values at the end of the file.

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# Set Hadoop-related environment variables
#export HADOOP_HOME=/home/hduser/hadoop
export HADOOP_HOME=/home/hduser/hadoop-2.6.5
export HADOOP_INSTALL=/home/hduser/hadoop-2.6.5

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/local/jdk1.8.0_111
export PATH=$PATH:$JAVA_HOME/bin
PATH=$PATH:$HOME/bin
export PATH

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
        hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin

# Add Pig bin/ directory to PATH
export PIG_HOME=/home/hduser/pig-0.15.0
export PATH=$PATH:$PIG_HOME/bin

# User specific aliases and functions
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

# Scala and Sqoop (the executables live in each bin subdirectory,
# so the PATH entries are $SCALA_HOME/bin and $SQOOP_HOME/bin)
export SCALA_HOME=/home/hduser/scala
export PATH=$PATH:$SCALA_HOME/bin

export SQOOP_HOME=/home/hduser/Softwares/sqoop
export PATH=$PATH:$SQOOP_HOME/bin

Step 7: Now apply the changes in the current running environment

[hduser@storage ~]$ source ~/.bashrc

[hduser@storage ~]$
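To confirm the variables took effect, a quick check (hadoop version resolves via the PATH entries we just added):

[hduser@storage ~]$ echo $HADOOP_HOME

/home/hduser/hadoop-2.6.5

[hduser@storage ~]$ hadoop version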

Step 8: Edit the Hadoop Environment File and Set JAVA_HOME

Now edit the $HADOOP_HOME/etc/hadoop/hadoop-env.sh file and set the JAVA_HOME environment variable. Change the Java path to match the installation on your system.

[hduser@storage ~]$ vi $HADOOP_HOME/etc/hadoop/hadoop-env.sh (make sure the following line is set)

export JAVA_HOME=/opt/jdk1.8.0_111/


Tip: to find the JAVA_HOME location, you can run the following.

[hduser@storage ~]$ readlink -f $(which java)
/opt/jdk1.8.0_111/bin/java
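Since JAVA_HOME is the directory two levels above the java binary, you can derive it directly from the readlink output (a small convenience one-liner):

[hduser@storage ~]$ dirname $(dirname $(readlink -f $(which java)))

/opt/jdk1.8.0_111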

 

[hduser@storage ~]$ alternatives --display java | head -2

java – status is manual.

link currently points to /usr/lib/jvm/jre-1.8.0-openjdk.x86_64/bin/java

 

Finally, make sure the Hadoop directory is owned by the Hadoop user and group:

[hduser@storage ~]$ chown -R hduser:hadoop /home/hduser/hadoop-2.6.5

 

Step 9: Edit the Configuration Files

Hadoop has many configuration files, which need to be configured as per the requirements of your Hadoop infrastructure. Let's start with the configuration for a basic Hadoop single-node cluster setup. First, navigate to the location below and create the temporary directory Hadoop will use (if the mkdir under /app fails with a permission error, run those two commands as root):


[hduser@storage hadoop]$ cd $HADOOP_HOME/etc/hadoop

[hduser@storage hadoop]$ mkdir -p /app/hadoop/tmp

[hduser@storage hadoop]$ chown hduser:hadoop /app/hadoop/tmp

[hduser@storage hadoop]$

Make sure you are in the following path (/home/hduser/hadoop-2.6.5/etc/hadoop) and edit each .xml file accordingly. Note: to make Hadoop actually use the /app/hadoop/tmp directory created above, also point the hadoop.tmp.dir property at it in core-site.xml; otherwise Hadoop defaults to /tmp.


Step 10: Edit core-site.xml (make sure the following property is added; fs.default.name is the deprecated Hadoop 1.x alias of fs.defaultFS, and both work on Hadoop 2.x)

[hduser@storage hadoop]$ vi core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://storage.castrading.com:9000</value>
<description>The name of the default file system.  A URI whose
scheme and authority determine the FileSystem implementation.  The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class.  The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>

Step 11: Edit mapred-site.xml (make sure the following property is added)

[hduser@storage hadoop]$  cp mapred-site.xml.template mapred-site.xml

[hduser@storage hadoop]$ vi mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>storage.castrading.com:9001</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
</configuration>

Note: the file must contain exactly one configuration element. Also, mapred.job.tracker is an MRv1 property; on Hadoop 2.x, where MapReduce runs on YARN, the controlling property is mapreduce.framework.name (typically set to yarn).

 

Step 12: Edit yarn-site.xml (make sure the following property is added; mapreduce_shuffle enables the shuffle auxiliary service that MapReduce jobs require on each NodeManager)

[hduser@storage hadoop]$ vi yarn-site.xml

<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<configuration>

<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

</configuration>

 

Step 13: Edit hdfs-site.xml

The $HADOOP_HOME/etc/hadoop/hdfs-site.xml file (here /home/hduser/hadoop-2.6.5/etc/hadoop/hdfs-site.xml) needs to be configured for each host in the cluster that is being used. It is used to specify the directories which will hold the namenode and datanode data on that host.

Before editing this file, we need to create two directories which will contain the namenode and the datanode data for this Hadoop installation. This can be done using the following commands:

 

[hduser@storage hadoop]$ mkdir -p /home/hduser/hadoop_store/hdfs/namenode

 

[hduser@storage hadoop]$ mkdir -p  /home/hduser/hadoop_store/hdfs/datanode

 

[hduser@storage hadoop]$ chown -R hduser:hadoop /home/hduser/hadoop_store

 

[hduser@storage hadoop]$

 

[hduser@storage hadoop]$ cd /home/hduser/hadoop-2.6.5/etc/hadoop/

 

(Make sure the following properties are added.)

[hduser@storage hadoop]$  vi hdfs-site.xml

 

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hduser/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hduser/hadoop_store/hdfs/datanode</value>
</property>
</configuration>
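Before formatting, it is worth a quick sanity check that both directories exist and are owned by hduser:

[hduser@storage hadoop]$ ls -ld /home/hduser/hadoop_store/hdfs/namenode /home/hduser/hadoop_store/hdfs/datanode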

Step 14: Format NEW HADOOP FILESYSTEM

Now the Hadoop filesystem needs to be formatted so that we can start to use it. The format command must be issued by a user with write permission, since it creates a current directory under the /home/hduser/hadoop_store/hdfs/namenode folder (hadoop namenode -format still works, but it is deprecated in favor of hdfs namenode -format, as the output below notes):

 

[hduser@storage hadoop]$ hadoop namenode -format

DEPRECATED: Use of this script to execute hdfs command is deprecated.

Instead use the hdfs command for it.

 

17/01/06 19:14:03 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG:   host = storage/192.168.0.227

STARTUP_MSG:   args = [-format]

STARTUP_MSG:   version = 2.7.3

STARTUP_MSG:   classpath = /home/hduser/hadoop/etc/hadoop:/home/hduser/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/junit-4.11.jar:/home/hduser/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hduser/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/home/hduser/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/home/hduser/hadoop/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/home/hduser/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/home/hduser/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/home/hduser/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/home/hduser/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hduser/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/hduser/hadoop/share/hadoop/common/lib/activation-1.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hduser/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hduser/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hduser/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/home/hduser/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/home/hduser/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/home/hduser/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/hduser/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/asm-3.2.jar:/home/hduser/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/home/hduser/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hduser/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/hduser/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/hduser/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/hduser/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/home/hduser/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/hduser/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hduser/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hduser/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/servlet-api-2.
5.jar:/home/hduser/hadoop/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/home/hduser/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/hduser/hadoop/share/hadoop/common/lib/xz-1.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/hduser/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/hduser/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/hduser/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/home/hduser/hadoop/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/home/hduser/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/home/hduser/hadoop/share/hadoop/common/hadoop-nfs-2.7.3.jar:/home/hduser/hadoop/share/hadoop/common/hadoop-common-2.7.3.jar:/home/hduser/hadoop/share/hadoop/hdfs:/home/hduser/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/home/hduser/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/home/hduser/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jett
ison-1.1.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/commons-compress
-1.4.1.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/home/hduser/hadoop/contrib/capacity-scheduler/*.jar:/home/hduser/hadoop/contrib/capacity-scheduler/*.jar

STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z

STARTUP_MSG:   java = 1.8.0_111

************************************************************/

17/01/06 19:14:09 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]

17/01/06 19:14:09 INFO namenode.NameNode: createNameNode [-format]

17/01/06 19:14:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Formatting using clusterid: CID-bdd72531-6e00-4efd-86ff-5d8da839e5d5

17/01/06 19:14:10 INFO namenode.FSNamesystem: No KeyProvider found.

17/01/06 19:14:10 INFO namenode.FSNamesystem: fsLock is fair:true

17/01/06 19:14:11 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000

17/01/06 19:14:11 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true

17/01/06 19:14:11 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000

17/01/06 19:14:11 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Jan 06 19:14:11

17/01/06 19:14:11 INFO util.GSet: Computing capacity for map BlocksMap

17/01/06 19:14:11 INFO util.GSet: VM type       = 64-bit

17/01/06 19:14:11 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB

17/01/06 19:14:11 INFO util.GSet: capacity      = 2^21 = 2097152 entries

17/01/06 19:14:11 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false

17/01/06 19:14:11 INFO blockmanagement.BlockManager: defaultReplication         = 1

17/01/06 19:14:11 INFO blockmanagement.BlockManager: maxReplication             = 512

17/01/06 19:14:11 INFO blockmanagement.BlockManager: minReplication             = 1

17/01/06 19:14:11 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2

17/01/06 19:14:11 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000

17/01/06 19:14:11 INFO blockmanagement.BlockManager: encryptDataTransfer        = false

17/01/06 19:14:11 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000

17/01/06 19:14:11 INFO namenode.FSNamesystem: fsOwner             = hduser (auth:SIMPLE)

17/01/06 19:14:11 INFO namenode.FSNamesystem: supergroup          = supergroup

17/01/06 19:14:11 INFO namenode.FSNamesystem: isPermissionEnabled = true

17/01/06 19:14:11 INFO namenode.FSNamesystem: HA Enabled: false

17/01/06 19:14:11 INFO namenode.FSNamesystem: Append Enabled: true

17/01/06 19:14:11 INFO util.GSet: Computing capacity for map INodeMap

17/01/06 19:14:11 INFO util.GSet: VM type       = 64-bit

17/01/06 19:14:11 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB

17/01/06 19:14:11 INFO util.GSet: capacity      = 2^20 = 1048576 entries

17/01/06 19:14:11 INFO namenode.FSDirectory: ACLs enabled? false

17/01/06 19:14:11 INFO namenode.FSDirectory: XAttrs enabled? true

17/01/06 19:14:11 INFO namenode.FSDirectory: Maximum size of an xattr: 16384

17/01/06 19:14:11 INFO namenode.NameNode: Caching file names occuring more than 10 times

17/01/06 19:14:11 INFO util.GSet: Computing capacity for map cachedBlocks

17/01/06 19:14:11 INFO util.GSet: VM type       = 64-bit

17/01/06 19:14:11 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB

17/01/06 19:14:11 INFO util.GSet: capacity      = 2^18 = 262144 entries

17/01/06 19:14:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033

17/01/06 19:14:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0

17/01/06 19:14:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000

17/01/06 19:14:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10

17/01/06 19:14:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10

17/01/06 19:14:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25

17/01/06 19:14:11 INFO namenode.FSNamesystem: Retry cache on namenode is enabled

17/01/06 19:14:11 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis

17/01/06 19:14:12 INFO util.GSet: Computing capacity for map NameNodeRetryCache

17/01/06 19:14:12 INFO util.GSet: VM type       = 64-bit

17/01/06 19:14:12 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB

17/01/06 19:14:12 INFO util.GSet: capacity      = 2^15 = 32768 entries

17/01/06 19:14:12 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1267260243-192.168.0.227-1483701252055

17/01/06 19:14:12 WARN namenode.NameNode: Encountered exception during format:

java.io.IOException: Cannot create directory /home/hdfs/hadoop_store/hdfs/namenode/current

at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)

at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:564)

at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:585)

at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)

at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:992)

at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1434)

at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)

17/01/06 19:14:12 ERROR namenode.NameNode: Failed to start namenode.

java.io.IOException: Cannot create directory /home/hdfs/hadoop_store/hdfs/namenode/current

at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)

at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:564)

at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:585)

at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)

at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:992)

at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1434)

at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)

17/01/06 19:14:12 INFO util.ExitUtil: Exiting with status 1

17/01/06 19:14:12 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at storage/192.168.0.227

************************************************************/

 

Note: the format run shown above actually failed: in that run dfs.namenode.name.dir pointed at /home/hdfs/hadoop_store/hdfs/namenode, a directory hduser cannot create, rather than at /home/hduser/hadoop_store/hdfs/namenode. (The banner also shows version 2.7.3 from a second Hadoop install under /home/hduser/hadoop on this machine; with this guide's setup your output will show 2.6.5.) With the hdfs-site.xml values from Step 13 and the directories created earlier, the format completes successfully.

Datanodes do not need a separate format step. If you are reinstalling and the datanode directory still holds data from an earlier cluster, clear /home/hduser/hadoop_store/hdfs/datanode before starting Hadoop.

 

Step 15: Now START HADOOP

Now it's time to start the newly installed single-node cluster. We can use start-all.sh, or start-dfs.sh and start-yarn.sh separately.

 

[hduser@storage hadoop]$ start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
17/03/06 18:02:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [storage.castrading.com]
storage.castrading.com: starting namenode, logging to /home/hduser/hadoop-2.6.5/logs/hadoop-hduser-namenode-storage.castrading.com.out
localhost: starting datanode, logging to /home/hduser/hadoop-2.6.5/logs/hadoop-hduser-datanode-storage.castrading.com.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hduser/hadoop-2.6.5/logs/hadoop-hduser-secondarynamenode-storage.castrading.com.out
17/03/06 18:02:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hduser/hadoop-2.6.5/logs/yarn-hduser-resourcemanager-storage.castrading.com.out
localhost: starting nodemanager, logging to /home/hduser/hadoop-2.6.5/logs/yarn-hduser-nodemanager-storage.castrading.com.out
[hduser@storage Desktop]$

Check the status as well:

[hduser@storage Desktop]$ jps
20770 NameNode
21203 ResourceManager
21060 SecondaryNameNode
20870 DataNode
21640 Jps
21305 NodeManager
[hduser@storage Desktop]$
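You can also confirm the daemons from a browser using the stock Hadoop 2.x web UIs (these are the default ports; substitute your own hostname):

http://storage.castrading.com:50070/  (NameNode)

http://storage.castrading.com:8088/  (ResourceManager)

To stop the cluster later, use stop-all.sh, or stop-dfs.sh and stop-yarn.sh.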
