gongdear's Tech Blog


Installing a CDH 6.3.0 Cluster on CentOS 7 (Part 1)

I. Environment Preparation

CDH's environment requirements are documented here: Cloudera Enterprise 6 Requirements and Supported Versions

First, create the virtual machines you need. I created six directly on my home ZStack host: one named commonbase for shared services (the common database, time synchronization, and so on; it also has a Docker environment), plus two masters and three nodes, configured as follows:

(image: VM configuration table)

Once the operating system is installed on all machines, perform the following steps.

1 Generate an SSH key pair

[root@commonbase ~]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:txYPTckMAK/5UphR3DQxmXMcM6Ou9GuVpV9mQX6dp18 root@commonbase
The key's randomart image is:
+---[RSA 2048]----+
|      .oo+*==.   |
|       o. =Bo= . |
|      . . .o= o o|
|       * . o  .++|
|      = S = .+ .+|
|       + + =+ . E|
|      . o +... =.|
|       . ...  . .|
|         ..      |
+----[SHA256]-----+
[root@commonbase ~]# 
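The interactive prompts above can also be skipped entirely. A non-interactive sketch is below; it writes to ./demo_key so it cannot clobber an existing /root/.ssh/id_rsa — on a real host, point -f at /root/.ssh/id_rsa instead.

```shell
# Generate a 2048-bit RSA key pair with an empty passphrase (-N '') and no prompts (-q).
# ./demo_key is a scratch path for this sketch; use -f /root/.ssh/id_rsa on a real host.
ssh-keygen -t rsa -b 2048 -N '' -f ./demo_key -q
ls -l demo_key demo_key.pub
```

-N '' supplies the empty passphrase that the interactive session above entered by pressing Enter twice.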

2 Set up passwordless SSH trust between hosts

[root@commonbase ~]# ssh-copy-id root@cdhmaster1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'cdhmaster1 (192.168.1.201)' can't be established.
ECDSA key fingerprint is SHA256:/N1iHHWyB5NhH1QX1e2LKRAAJ2ficDjOgncyCOCmhNQ.
ECDSA key fingerprint is MD5:de:e0:e4:e9:f6:c2:e8:48:fb:81:e9:44:b1:46:e7:75.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@cdhmaster1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@cdhmaster1'"
and check to make sure that only the key(s) you wanted were added.

Note that you must also copy the key to the machine itself; repeat this on every machine.
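Rather than typing ssh-copy-id once per target, a small loop covers all six hosts. The sketch below is a dry run (it only prints the commands, via echo); remove the echo to actually copy the keys, entering each host's root password once.

```shell
# All cluster hosts, including the local one (copying to yourself is required too).
HOSTS="commonbase cdhmaster1 cdhmaster2 cdhnode1 cdhnode2 cdhnode3"
for h in $HOSTS; do
    echo ssh-copy-id "root@$h"   # remove 'echo' to run for real
done
```

Run the same loop on every machine so that each host trusts all the others.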

3 Disable SELinux

[root@cdhmaster1 ~]# vi /etc/sysconfig/selinux 

-> SELINUX=disabled

[root@cdhmaster1 ~]# scp /etc/sysconfig/selinux root@cdhmaster2:/etc/sysconfig/selinux
selinux                                                                                                 100%  542   856.9KB/s   00:00  
[root@cdhmaster1 ~]# scp /etc/sysconfig/selinux root@cdhnode1:/etc/sysconfig/selinux
selinux                                                                                                 100%  542   662.1KB/s   00:00  
[root@cdhmaster1 ~]# scp /etc/sysconfig/selinux root@cdhnode2:/etc/sysconfig/selinux
selinux                                                                                                 100%  542   797.2KB/s   00:00  
[root@cdhmaster1 ~]# scp /etc/sysconfig/selinux root@cdhnode3:/etc/sysconfig/selinux
selinux                                                                                                 100%  542   836.9KB/s   00:00  
[root@cdhmaster1 ~]# reboot
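The same edit can be scripted instead of done in vi. The sketch below works on a scratch copy so it is safe to run anywhere; on the real hosts, point sed at /etc/sysconfig/selinux, and note that the file change only takes effect after a reboot (running `setenforce 0` switches to permissive mode immediately in the meantime).

```shell
# Demonstrate the edit on a scratch copy; use /etc/sysconfig/selinux on a real host.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > selinux.demo
sed -i 's/^SELINUX=.*/SELINUX=disabled/' selinux.demo
grep '^SELINUX=' selinux.demo    # -> SELINUX=disabled
```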

4 Disable the firewall

[root@cdhmaster1 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@cdhmaster1 ~]# systemctl stop firewalld

5 Configure /etc/hosts

Add the IP addresses and domain names of all cluster hosts to /etc/hosts on every machine.
Note: the hostname must be an FQDN (fully qualified domain name), e.g. commonbase.example.com; otherwise a validation step will fail later in the web UI when the Agents start.

[root@cdhmaster1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# home common
192.168.1.200	commonbase.example.com	commonbase

# home masters
192.168.1.201	cdhmaster1.example.com	cdhmaster1
192.168.1.202	cdhmaster2.example.com	cdhmaster2

# home nodes
192.168.1.211	cdhnode1.example.com	cdhnode1
192.168.1.212	cdhnode2.example.com	cdhnode2
192.168.1.213	cdhnode3.example.com	cdhnode3

[root@cdhmaster1 ~]# hostnamectl set-hostname cdhmaster1.example.com
[root@cdhmaster1 ~]# hostnamectl 
   Static hostname: cdhmaster1.example.com
         Icon name: computer-vm
           Chassis: vm
        Machine ID: ac9825d124e7483d8575a805cab735a9
           Boot ID: b37b57af573345a3be8744dbf9779b4b
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1127.el7.x86_64
      Architecture: x86-64
[root@cdhmaster1 ~]# scp /etc/hosts root@commonbase:/etc/hosts
hosts                                                                                                   100%  479    19.9KB/s   00:00  
[root@cdhmaster1 ~]# scp /etc/hosts root@cdhmaster2:/etc/hosts
hosts                                                                                                   100%  479     3.5KB/s   00:00  
[root@cdhmaster1 ~]# scp /etc/hosts root@cdhnode1:/etc/hosts
hosts                                                                                                   100%  479    26.6KB/s   00:00  
[root@cdhmaster1 ~]# scp /etc/hosts root@cdhnode2:/etc/hosts
hosts                                                                                                   100%  479     0.5KB/s   00:01  
[root@cdhmaster1 ~]# scp /etc/hosts root@cdhnode3:/etc/hosts
hosts
[root@cdhmaster1 ~]# cat /etc/sysconfig/network
# Created by anaconda
HOSTNAME=cdhmaster1.example.com
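Each machine needs its own FQDN set with hostnamectl. A dry-run sketch that derives the command for every host from its short name (remove the echo and it will execute over SSH, assuming the passwordless trust set up earlier):

```shell
# Emit one hostnamectl command per host; short names match the /etc/hosts entries above.
for h in commonbase cdhmaster1 cdhmaster2 cdhnode1 cdhnode2 cdhnode3; do
    echo ssh "root@$h" "hostnamectl set-hostname ${h}.example.com"
done
```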

6 Configure NTP

6.1 NTP configuration for all machines

[root@commonbase etc]# cat /etc/sysconfig/ntpd
# Command line options for ntpd
OPTIONS="-g"
SYNC_HWCLOCK=yes
[root@commonbase etc]# cat /etc/ntp/step-tickers
# List of NTP servers used by the ntpdate service.

#0.centos.pool.ntp.org
commonbase.example.com

6.2 NTP server configuration

Here the commonbase machine acts as the NTP server.

[root@commonbase etc]# ip route show
default via 192.168.1.1 dev eth0 proto dhcp metric 100 
169.254.169.254 via 192.168.1.246 dev eth0 proto dhcp metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.200 metric 100

[root@commonbase etc]# cat /etc/ntp.conf
driftfile /var/lib/ntp/drift
logfile /var/log/ntp.log
pidfile   /var/run/ntpd.pid
leapfile  /etc/ntp.leapseconds
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
# Allow clients from any IP to synchronize time, but not to modify NTP server settings;
# "default" here behaves like 0.0.0.0
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
#restrict 10.135.3.58 nomodify notrap nopeer noquery
# Allow full access over the local loopback interface
restrict 127.0.0.1
restrict  -6 ::1
# Allow other machines on the LAN to synchronize time (network plus netmask). Some clusters
# have unusual gateways; the commands below will show this information
# Gateway info: /etc/sysconfig/network-scripts/ifcfg-<interface>; route -n; ip route show
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Allow an upstream time server to adjust the local clock
#server asia.pool.ntp.org minpoll 4 maxpoll 4 prefer
## When no external time server is reachable, serve the local clock as the time source
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10
[root@commonbase etc]# systemctl restart ntpd
[root@commonbase etc]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@commonbase etc]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2020-08-28 23:57:36 CST; 1min 41s ago
 Main PID: 19335 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─19335 /usr/sbin/ntpd -u ntp:ntp -g

8月 28 23:57:36 commonbase.example.com systemd[1]: Starting Network Time Service...
8月 28 23:57:36 commonbase.example.com ntpd[19335]: proto: precision = 0.045 usec
8月 28 23:57:36 commonbase.example.com ntpd[19335]: 0.0.0.0 c01d 0d kern kernel time sync enabled
8月 28 23:57:36 commonbase.example.com systemd[1]: Started Network Time Service.

6.3 NTP client configuration

[root@cdhmaster1 etc]# ntpstat
synchronised to NTP server (192.168.1.200) at stratum 12
   time correct to within 979 ms
   polling server every 64 s
[root@cdhmaster1 etc]# cat /etc/ntp.conf
driftfile /var/lib/ntp/drift
logfile /var/log/ntp.log
pidfile   /var/run/ntpd.pid
leapfile  /etc/ntp.leapseconds
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
server 192.168.1.200 iburst

[root@cdhmaster1 etc]# systemctl start ntpd
[root@cdhmaster1 etc]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: active (running) since 六 2020-08-29 00:31:08 CST; 2s ago
  Process: 1743 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1744 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─1744 /usr/sbin/ntpd -u ntp:ntp -g

8月 29 00:31:08 cdhmaster1.example.com systemd[1]: Starting Network Time Service...
8月 29 00:31:08 cdhmaster1.example.com ntpd[1744]: proto: precision = 0.033 usec
8月 29 00:31:08 cdhmaster1.example.com systemd[1]: Started Network Time Service.
8月 29 00:31:08 cdhmaster1.example.com ntpd[1744]: 0.0.0.0 c01d 0d kern kernel time sync enabled
[root@cdhmaster1 etc]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@cdhmaster1 etc]# ntpstat
unsynchronised
  time server re-starting
   polling server every 8 s
-> Note: make sure the NTP port (UDP 123) on the server is reachable from the clients.
[root@cdhmaster1 etc]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 commonbase.exam LOCAL(0)        11 u    1   64    1    0.316   30.167   0.043
[root@cdhmaster1 etc]# ntpstat
synchronised to NTP server (192.168.1.200) at stratum 12
   time correct to within 979 ms
   polling server every 64 s
[root@cdhmaster1 etc]# scp ntp.conf root@cdhmaster2:/etc/ntp.conf
ntp.conf                                                                                                100%  333   512.5KB/s   00:00  
[root@cdhmaster1 etc]# scp ntp.conf root@cdhnode1:/etc/ntp.conf
ntp.conf                                                                                                100%  333   461.9KB/s   00:00  
[root@cdhmaster1 etc]# scp ntp.conf root@cdhnode2:/etc/ntp.conf
ntp.conf                                                                                                100%  333   525.6KB/s   00:00  
[root@cdhmaster1 etc]# scp ntp.conf root@cdhnode3:/etc/ntp.conf
ntp.conf

7 Install MySQL

Here I start MySQL with Docker directly on commonbase; see my earlier Docker tutorial for details.

[root@commonbase shell]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                     NAMES
5d08e9cd8102        mysql:5.7.19        "docker-entrypoint.s…"   14 seconds ago      Up 12 seconds       0.0.0.0:39106->3306/tcp   cdh_mysql_5719
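For reference, a command along these lines would produce the container shown above. The root password and volume path here are assumptions, not what I actually used — adjust them before running. The sketch only prints the command; review it, then run it with eval "$cmd".

```shell
# Assumed values: change MYSQL_ROOT_PASSWORD and the volume path before using this for real.
cmd='docker run -d --name cdh_mysql_5719 \
  -p 39106:3306 \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /data/mysql/cdh:/var/lib/mysql \
  mysql:5.7.19'
echo "$cmd"
```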

II. Resource Preparation

1 Prepare an Apache server (optional)

Set up an Apache server on commonbase, mounting a local directory as its document root:

docker run -p 80:80 --name apache2 -v /etc/localtime:/etc/localtime -v /data/apache2/data/htdocs:/usr/local/apache2/htdocs --restart=always -d httpd

[root@commonbase shell]# docker ps
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                           NAMES
3cab3dd3cfa8        mysql:5.7.19                    "docker-entrypoint.s…"   7 hours ago         Up 7 hours          0.0.0.0:39106->3306/tcp         cdh-mysql
9b8a6981d02e        redis                           "docker-entrypoint.s…"   7 hours ago         Up 7 hours          0.0.0.0:6379->6379/tcp          redis-server
77dcec912062        httpd                           "httpd-foreground"       8 hours ago         Up 8 hours          0.0.0.0:80->80/tcp              apache2

2 Prepare the resources

Create a cloudera-repos directory under Apache's htdocs directory.

2.1 Download the parcels

https://archive.cloudera.com/cdh6/6.3.2/parcels/

wget https://archive.cloudera.com/cdh6/6.3.2/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554-el7.parcel
wget https://archive.cloudera.com/cdh6/6.3.2/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554-el7.parcel.sha1
wget https://archive.cloudera.com/cdh6/6.3.2/parcels/manifest.json
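It is worth verifying the parcel against its published checksum before serving it. The .sha1 file contains only the bare hash, so the "HASH  FILENAME" line that `sha1sum -c` expects has to be built. The sketch below fabricates a stand-in parcel so it can be run anywhere; on the real download, skip the two stand-in lines.

```shell
PARCEL=CDH-6.3.2-1.cdh6.3.2.p0.1605554-el7.parcel
# Stand-in file so this sketch is runnable; omit these two lines for the real parcel.
printf 'demo parcel contents' > "$PARCEL"
sha1sum "$PARCEL" | awk '{print $1}' > "${PARCEL}.sha1"
# The actual verification step:
echo "$(cat "${PARCEL}.sha1")  $PARCEL" | sha1sum -c -
```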

2.2 Download the RPMs

https://archive.cloudera.com/cm6/6.3.1/redhat7/yum/RPMS/x86_64/

wget https://archive.cloudera.com/cm6/6.3.1/redhat7/yum/RPMS/x86_64/cloudera-manager-agent-6.3.1-1466458.el7.x86_64.rpm
wget https://archive.cloudera.com/cm6/6.3.1/redhat7/yum/RPMS/x86_64/cloudera-manager-daemons-6.3.1-1466458.el7.x86_64.rpm
wget https://archive.cloudera.com/cm6/6.3.1/redhat7/yum/RPMS/x86_64/cloudera-manager-server-6.3.1-1466458.el7.x86_64.rpm
wget https://archive.cloudera.com/cm6/6.3.1/redhat7/yum/RPMS/x86_64/cloudera-manager-server-db-2-6.3.1-1466458.el7.x86_64.rpm
wget https://archive.cloudera.com/cm6/6.3.1/redhat7/yum/RPMS/x86_64/enterprise-debuginfo-6.3.1-1466458.el7.x86_64.rpm
wget https://archive.cloudera.com/cm6/6.3.1/redhat7/yum/RPMS/x86_64/oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm

2.3 Download the remaining files

wget https://archive.cloudera.com/cm6/6.3.1/redhat7/yum/RPM-GPG-KEY-cloudera
wget https://archive.cloudera.com/cm6/6.3.1/redhat7/yum/cloudera-manager.repo
wget https://archive.cloudera.com/cm6/6.3.1/allkeys.asc

To speed up the installation, place the files above into the matching directory layout under Apache's cloudera-repos directory.

2.4 Initialize the repodata (optional)

If createrepo is not installed, install it with yum:

yum -y install createrepo
cd /data/apache2/data/htdocs/cloudera-repos/cm6/6.3.1/redhat7/yum/
createrepo .
[root@commonbase 6.3.1]# cd redhat7/yum/
[root@commonbase yum]# ls
cloudera-manager.repo  RPM-GPG-KEY-cloudera  RPMS
[root@commonbase yum]# createrepo .
Spawning worker 0 with 2 pkgs
Spawning worker 1 with 2 pkgs
Spawning worker 2 with 1 pkgs
Spawning worker 3 with 1 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
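With the repodata in place, each cluster host can point yum at the local mirror instead of archive.cloudera.com. A sketch of /etc/yum.repos.d/cloudera-manager.repo, assuming the mirror is served by the Apache container on commonbase.example.com as set up above:

```ini
[cloudera-manager]
name=Cloudera Manager 6.3.1
baseurl=http://commonbase.example.com/cloudera-repos/cm6/6.3.1/redhat7/yum/
gpgkey=http://commonbase.example.com/cloudera-repos/cm6/6.3.1/redhat7/yum/RPM-GPG-KEY-cloudera
gpgcheck=1
enabled=1
```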

3 Downloading a complete mirror (alternative)

3.1 Download the parcel files

cd /data/apache2/data/htdocs/cloudera-repos
sudo wget --recursive --no-parent --no-host-directories https://archive.cloudera.com/cdh6/6.3.2/parcels/ -P /data/apache2/data/htdocs/cloudera-repos
sudo wget --recursive --no-parent --no-host-directories https://archive.cloudera.com/gplextras6/6.3.2/parcels/ -P /data/apache2/data/htdocs/cloudera-repos
sudo chmod -R ugo+rX /data/apache2/data/htdocs/cloudera-repos/cdh6
sudo chmod -R ugo+rX /data/apache2/data/htdocs/cloudera-repos/gplextras6

3.2 Download Cloudera Manager

sudo wget --recursive --no-parent --no-host-directories https://archive.cloudera.com/cm6/6.3.1/redhat7/ -P /data/apache2/data/htdocs/cloudera-repos
sudo wget https://archive.cloudera.com/cm6/6.3.1/allkeys.asc -P /data/apache2/data/htdocs/cloudera-repos/cm6/6.3.1/
sudo chmod -R ugo+rX /data/apache2/data/htdocs/cloudera-repos/cm6

4 Download the JDBC driver

MySQL is used here for the metadata databases, so the MySQL JDBC driver is needed. If you chose a different database, read "Install and Configure Databases" carefully.

Unpack the downloaded archive to obtain mysql-connector-java-5.1.46-bin.jar, and be sure to rename it to mysql-connector-java.jar:

wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.46.tar.gz
# Unpack
tar zxvf mysql-connector-java-5.1.46.tar.gz
# Rename mysql-connector-java-5.1.46-bin.jar to mysql-connector-java.jar and place it in /usr/share/java/
mv mysql-connector-java-5.1.46-bin.jar /usr/share/java/mysql-connector-java.jar
# Also copy it to the other nodes
[root@commonbase mysql-connector-java-5.1.46]# scp mysql-connector-java.jar root@cdhmaster1:/usr/share/java/mysql-connector-java.jar
mysql-connector-java.jar                                                                                                                               100%  981KB   2.2MB/s   00:00  
[root@commonbase mysql-connector-java-5.1.46]# scp mysql-connector-java.jar root@cdhmaster2:/usr/share/java/mysql-connector-java.jar
mysql-connector-java.jar                                                                                                                               100%  981KB   2.1MB/s   00:00  
[root@commonbase mysql-connector-java-5.1.46]# scp mysql-connector-java.jar root@cdhnode1:/usr/share/java/mysql-connector-java.jar
mysql-connector-java.jar                                                                                                                               100%  981KB   9.6MB/s   00:00  
[root@commonbase mysql-connector-java-5.1.46]# scp mysql-connector-java.jar root@cdhnode2:/usr/share/java/mysql-connector-java.jar
mysql-connector-java.jar                                                                                                                               100%  981KB   4.0MB/s   00:00  
[root@commonbase mysql-connector-java-5.1.46]# scp mysql-connector-java.jar root@cdhnode3:/usr/share/java/mysql-connector-java.jar
mysql-connector-java.jar 
A sword's edge comes from grinding; the plum blossom's fragrance comes from bitter cold.