
Hadoop 1.x Installation & Configuration

2015.09.18 14:00

Hostway


1. Host Environment Setup

- Edit the hosts file

[root@node1 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.30.101.143 node1

10.30.101.145 node2

10.30.101.146 node3

[root@node1 ~]# 
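The same three entries must exist on node2 and node3 as well. A minimal sketch for pushing the file out, assuming root SSH access between the nodes:

[root@node1 ~]# scp /etc/hosts root@node2:/etc/hosts

[root@node1 ~]# scp /etc/hosts root@node3:/etc/hosts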

- Install and switch Java (requires root)

- On CentOS 6.x, an OpenJDK build is installed by default

- Hadoop 1.x runs correctly on Java 1.6 or later

[root@node1 ~]# rpm -qa | grep java

tzdata-java-2012c-1.el6.noarch

java-1.6.0-openjdk-1.6.0.0-1.45.1.11.1.el6.x86_64

[root@node1 ~]# 

- Remove the existing JDK (here the old /usr/bin/java binary is simply moved aside)

[root@node1 ~]# 

[root@node1 ~]# mv /usr/bin/java /usr/bin/java-old

[root@node1 ~]# 
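Renaming the binary works, but on CentOS the OpenJDK package can also be removed outright, or the active java switched with the alternatives tool. A hedged sketch, with the package name taken from the rpm -qa output above:

[root@node1 ~]# yum remove java-1.6.0-openjdk

Or, to keep the package and just switch which java /usr/bin points at:

[root@node1 ~]# alternatives --config java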


- Install a JDK at version 1.6 or later

[root@node1 ~]# wget http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz

[root@node1 ~]# tar zxvf jdk-8u45-linux-x64.tar.gz

[root@node1 ~]# mv jdk1.8.0_45/ /usr/local/ 

[root@node1 local]# ln -s jdk1.8.0_45/ java 

[root@node1 local]# vi /etc/profile

# Add the environment variables

export JAVA_HOME=/usr/local/java

export CLASSPATH=.:$JAVA_HOME/lib/tools.jar

export PATH=$PATH:$JAVA_HOME/bin

# Apply the environment variables

[root@node1 local]# source /etc/profile 

[root@node1 local]# java -version

java version "1.8.0_45"

Java(TM) SE Runtime Environment (build 1.8.0_45-b14)

Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)

[root@node1 local]# 

- Apply the settings above identically on every server (a quick check over SSH is sketched below)
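A way to confirm the Java setup on every node at once, using the host names from /etc/hosts (passwords are prompted until the keys in step 2 are exchanged; the absolute path matters because non-login SSH shells do not source /etc/profile):

[root@node1 ~]# for h in node1 node2 node3; do ssh $h '/usr/local/java/bin/java -version'; done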


2. Generating and Exchanging SSH Public Keys
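From here on, commands run as a hadoop user, which this walkthrough assumes already exists on every node. If it does not, a minimal sketch for creating it (as root, on node1 through node3):

[root@node1 ~]# useradd hadoop

[root@node1 ~]# passwd hadoop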

- Generate the public key

[root@node1 ~]# su - hadoop

[hadoop@node1 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 

Created directory '/home/hadoop/.ssh'.

Enter passphrase (empty for no passphrase): 

Enter same passphrase again: 

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

a5:df:d2:e4:15:b3:5e:c4:81:51:9e:49:ee:0c:ee:74 hadoop@node1

The key's randomart image is:

+--[ RSA 2048]----+

|             .++ |

|             .+.+|

|          .  .o=o|

|         o  . += |

|        S   .ooE.|

|         . =oo.. |

|          o +..  |

|           .     |

|                 |

+-----------------+

[hadoop@node1 ~]$ 

- Create the combined public-key file (generate a key pair as the hadoop user on node2 and node3 first, so each node has an id_rsa.pub to collect)

[hadoop@node1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[hadoop@node1 ~]$ ssh hadoop@node2 cat ~/.ssh/id_rsa.pub >>.ssh/authorized_keys 

The authenticity of host 'node2 (10.30.101.145)' can't be established.

RSA key fingerprint is ad:ec:74:80:8d:ea:8a:07:5a:27:67:95:50:bc:87:0d.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'node2,10.30.101.145' (RSA) to the list of known hosts.

hadoop@node2's password: 

[hadoop@node1 ~]$ ssh hadoop@node3 cat ~/.ssh/id_rsa.pub >>.ssh/authorized_keys 

The authenticity of host 'node3 (10.30.101.146)' can't be established.

RSA key fingerprint is ad:ec:74:80:8d:ea:8a:07:5a:27:67:95:50:bc:87:0d.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'node3,10.30.101.146' (RSA) to the list of known hosts.

hadoop@node3's password: 

[hadoop@node1 ~]$ 

- Copy the combined key file to each server

[hadoop@node1 ~]$ scp -rp .ssh/authorized_keys hadoop@node2:~/.ssh/

hadoop@node2's password: 

authorized_keys                                                    100% 1576     1.5KB/s   00:00    

[hadoop@node1 ~]$ scp -rp .ssh/authorized_keys hadoop@node3:~/.ssh/

hadoop@node3's password: 

authorized_keys                                                    100% 1576     1.5KB/s   00:00    

[hadoop@node1 ~]$ 

- Test SSH access using the public keys

[hadoop@node1 ~]$ ssh hadoop@node2

[hadoop@node2 ~]$ ssh hadoop@node3

[hadoop@node3 ~]$ 

3. Installing Hadoop and Changing Its Configuration

- Download Hadoop

[hadoop@node1 ~]$ wget https://archive.apache.org/dist/hadoop/core/hadoop-1.2.1/hadoop-1.2.1.tar.gz

--2015-07-09 16:42:34--  https://archive.apache.org/dist/hadoop/core/hadoop-1.2.1/hadoop-1.2.1.tar.gz

Resolving archive.apache.org... 140.211.11.131, 192.87.106.229, 2001:610:1:80bc:192:87:106:229

Connecting to archive.apache.org|140.211.11.131|:443... connected.

HTTP request sent, awaiting response... 200 OK

Length: 63851630 (61M) [application/x-gzip]

Saving to: “hadoop-1.2.1.tar.gz”

100%[===========================================================>] 63,851,630  4.70M/s   in 15s     

2015-07-09 16:42:51 (3.96 MB/s) - “hadoop-1.2.1.tar.gz” saved [63851630/63851630]

[hadoop@node1 ~]$ 

- Extract and set up the symlink

[hadoop@node1 ~]$ tar zxvf hadoop-1.2.1.tar.gz 

[hadoop@node1 ~]$ ln -s hadoop-1.2.1 hadoop

[hadoop@node1 ~]$ ls

hadoop  hadoop-1.2.1 

[hadoop@node1 ~]$ 
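The conf file copy at the end of this section assumes the same hadoop-1.2.1 tree already exists on the slaves. A minimal sketch for getting it there, reusing the tarball already downloaded:

[hadoop@node1 ~]$ scp hadoop-1.2.1.tar.gz hadoop@node2:~/

[hadoop@node1 ~]$ scp hadoop-1.2.1.tar.gz hadoop@node3:~/

[hadoop@node1 ~]$ ssh hadoop@node2 'tar zxf hadoop-1.2.1.tar.gz && ln -s hadoop-1.2.1 hadoop'

[hadoop@node1 ~]$ ssh hadoop@node3 'tar zxf hadoop-1.2.1.tar.gz && ln -s hadoop-1.2.1 hadoop'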

- Set HADOOP_HOME and add it to PATH (repeat on every node; the transcript below is from node3)

[root@node3 hadoop]# vi /etc/profile

export HADOOP_HOME=/home/hadoop/hadoop

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

[root@node3 hadoop]# source /etc/profile

- Create the base storage directories Hadoop needs (on all servers)

[root@node1 hadoop]# mkdir ./hdfs

[root@node1 hadoop]# mkdir hdfs/pids

[root@node1 hadoop]# mkdir hdfs/namenode

[root@node1 hadoop]# mkdir hdfs/datanode

[root@node1 hdfs]# ls -la

total 20

drwxr-xr-x. 5 root   root   4096 Jul  9 16:52 .

drwx------. 5 hadoop hadoop 4096 Jul  9 16:50 ..

drwxr-xr-x. 2 root   root   4096 Jul  9 16:52 datanode

drwxr-xr-x. 2 root   root   4096 Jul  9 16:51 namenode

drwxr-xr-x. 2 root   root   4096 Jul  9 16:51 pids

[root@node1 hdfs]# 

- Check and change access permissions on the Hadoop directories

No Hadoop-related directory may be writable by anyone other than its owner.

Every directory created above should end up with 'rwxr-xr-x' (755) permissions; note that they were just created by root, while the daemons will run as the hadoop user.

If that is not the case, change ownership and permissions as sketched below.
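A minimal sketch of that fix, handing the root-created tree to the hadoop user:

[root@node1 ~]# chown -R hadoop:hadoop /home/hadoop/hdfs

[root@node1 ~]# chmod -R 755 /home/hadoop/hdfs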

- 'core-site.xml'

[hadoop@node1 ~]$ vi hadoop/conf/core-site.xml 

<property>

<name>fs.default.name</name>

<value>hdfs://node1:9000</value>

</property>
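These <property> blocks, here and in the two files that follow, belong inside the file's existing <configuration> element. For reference, a complete core-site.xml under that assumption:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://node1:9000</value>
</property>
</configuration>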

- 'hdfs-site.xml' (note: slaves lists only two DataNodes, so the dfs.replication value of 3 below cannot be fully satisfied; 2 would match this cluster)

[hadoop@node1 ~]$ vi hadoop/conf/hdfs-site.xml 

<property>

<name>dfs.name.dir</name> 

<value>/home/hadoop/hdfs/namenode</value>

</property>

<property>

<name>dfs.data.dir</name>

<value>/home/hadoop/hdfs/datanode</value>

</property>

<property> 

<name>dfs.replication</name>

<value>3</value>

</property>

- 'mapred-site.xml'

[hadoop@node1 ~]$ vi hadoop/conf/mapred-site.xml

<property>

<name>mapred.job.tracker</name>

<value>node1:9001</value>

</property>

<property>

<name>mapred.tasktracker.map.tasks.maximum</name>

<value>4</value>

</property>

<property>

<name>mapred.tasktracker.reduce.tasks.maximum</name>

<value>4</value>

</property>

- 'masters'

[hadoop@node1 ~]$ vi hadoop/conf/masters

node1 

- 'slaves'

[hadoop@node1 ~]$ vi hadoop/conf/slaves 

node2

node3

- 'hadoop-env.sh'

[hadoop@node1 ~]$ vi hadoop/conf/hadoop-env.sh 

export HADOOP_PID_DIR=/home/hadoop/hdfs/pids
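One more line is worth adding while this file is open: start-all.sh launches the remote daemons over non-interactive SSH, which does not source /etc/profile, so hadoop-env.sh should set JAVA_HOME itself (the stock file ships this line commented out). Using the symlink created earlier:

export JAVA_HOME=/usr/local/java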

- Copy the configuration files to the other nodes

[hadoop@node1 ~]$ scp -rp ./hadoop/conf/* hadoop@node2:~/hadoop/conf/

capacity-scheduler.xml                                             100% 7457     7.3KB/s   00:00    

configuration.xsl                                                  100% 1095     1.1KB/s   00:00    

core-site.xml                                                      100%  265     0.3KB/s   00:00    

fair-scheduler.xml                                                 100%  327     0.3KB/s   00:00    

hadoop-env.sh                                                      100% 2529     2.5KB/s   00:00    

hadoop-metrics2.properties                                         100% 2052     2.0KB/s   00:00    

hadoop-policy.xml                                                  100% 4644     4.5KB/s   00:00    

hdfs-site.xml                                                      100%  432     0.4KB/s   00:00    

log4j.properties                                                   100% 5018     4.9KB/s   00:00    

mapred-queue-acls.xml                                              100% 2033     2.0KB/s   00:00    

mapred-site.xml                                                    100%  449     0.4KB/s   00:00    

masters                                                            100%    6     0.0KB/s   00:00    

slaves                                                             100%   13     0.0KB/s   00:00    

ssl-client.xml.example                                             100% 2042     2.0KB/s   00:00    

ssl-server.xml.example                                             100% 1994     2.0KB/s   00:00    

taskcontroller.cfg                                                 100%  382     0.4KB/s   00:00    

task-log4j.properties                                              100% 3890     3.8KB/s   00:00    

[hadoop@node1 ~]$ scp -rp ./hadoop/conf/* hadoop@node3:~/hadoop/conf/

capacity-scheduler.xml                                             100% 7457     7.3KB/s   00:00    

configuration.xsl                                                  100% 1095     1.1KB/s   00:00    

core-site.xml                                                      100%  265     0.3KB/s   00:00    

fair-scheduler.xml                                                 100%  327     0.3KB/s   00:00    

hadoop-env.sh                                                      100% 2529     2.5KB/s   00:00    

hadoop-metrics2.properties                                         100% 2052     2.0KB/s   00:00    

hadoop-policy.xml                                                  100% 4644     4.5KB/s   00:00    

hdfs-site.xml                                                      100%  432     0.4KB/s   00:00    

log4j.properties                                                   100% 5018     4.9KB/s   00:00    

mapred-queue-acls.xml                                              100% 2033     2.0KB/s   00:00    

mapred-site.xml                                                    100%  449     0.4KB/s   00:00    

masters                                                            100%    6     0.0KB/s   00:00    

slaves                                                             100%   13     0.0KB/s   00:00    

ssl-client.xml.example                                             100% 2042     2.0KB/s   00:00    

ssl-server.xml.example                                             100% 1994     2.0KB/s   00:00    

taskcontroller.cfg                                                 100%  382     0.4KB/s   00:00    

task-log4j.properties                                              100% 3890     3.8KB/s   00:00    

[hadoop@node1 ~]$ 

- Format the NameNode and initialize the distributed filesystem

*Run only on the NameNode (node1); formatting initializes the HDFS namespace and is done once

[hadoop@node1 ~]$ hadoop namenode -format

- Start Hadoop

[hadoop@node1 ~]$ hadoop/bin/start-all.sh
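Once start-all.sh completes, the JDK's jps tool gives a quick sanity check. With this configuration, node1 should be running NameNode, SecondaryNameNode, and JobTracker, and node2/node3 should each be running DataNode and TaskTracker (the absolute path is used remotely because non-login shells skip /etc/profile):

[hadoop@node1 ~]$ jps

[hadoop@node1 ~]$ ssh node2 /usr/local/java/bin/jps

[hadoop@node1 ~]$ ssh node3 /usr/local/java/bin/jps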