
Setting Up MongoDB Sharding + Replica Sets with User Authentication

The port layout is as follows: config servers on 20000, mongos on 30000, and the three shard replica sets rs1/rs2/rs3 on 10001/10002/10003.

MongoDB installation steps are omitted; just extract the tarball. The install path is /home/app and the MongoDB version is mongodb-linux-x86_64-rhel62-3.2.6.

Before configuring user authentication, be sure to read this article first to avoid unnecessary detours: https://www.tracymc.cn/archives/2214. Don't skip it!

The installation steps are as follows:

On all three machines:

// directory for configuration files
mkdir -p /home/app/mongodb/config
// config server paths
mkdir -p /home/app/mongodb/config/data
mkdir -p /home/app/mongodb/config/logs && touch /home/app/mongodb/config/logs/config.log
// mongos paths
mkdir -p /home/app/mongodb/mongos/logs && touch /home/app/mongodb/mongos/logs/mongos.log
192.168.139.142
mkdir -p /home/app/mongodb/conf
mkdir -p /home/app/mongodb/rs1-master
mkdir -p /home/app/mongodb/rs1-master/data
mkdir -p /home/app/mongodb/rs1-master/logs
mkdir -p /home/app/mongodb/rs2-arbiter
mkdir -p /home/app/mongodb/rs2-arbiter/data
mkdir -p /home/app/mongodb/rs2-arbiter/logs
mkdir -p /home/app/mongodb/rs3-slaver
mkdir -p /home/app/mongodb/rs3-slaver/data
mkdir -p /home/app/mongodb/rs3-slaver/logs
192.168.139.149
mkdir -p /home/app/mongodb/conf
mkdir -p /home/app/mongodb/rs1-slaver
mkdir -p /home/app/mongodb/rs1-slaver/data
mkdir -p /home/app/mongodb/rs1-slaver/logs
mkdir -p /home/app/mongodb/rs2-master
mkdir -p /home/app/mongodb/rs2-master/data
mkdir -p /home/app/mongodb/rs2-master/logs
mkdir -p /home/app/mongodb/rs3-arbiter
mkdir -p /home/app/mongodb/rs3-arbiter/data
mkdir -p /home/app/mongodb/rs3-arbiter/logs
192.168.139.148
mkdir -p /home/app/mongodb/conf
mkdir -p /home/app/mongodb/rs1-arbiter
mkdir -p /home/app/mongodb/rs1-arbiter/data
mkdir -p /home/app/mongodb/rs1-arbiter/logs
mkdir -p /home/app/mongodb/rs2-slaver
mkdir -p /home/app/mongodb/rs2-slaver/data
mkdir -p /home/app/mongodb/rs2-slaver/logs
mkdir -p /home/app/mongodb/rs3-master
mkdir -p /home/app/mongodb/rs3-master/data
mkdir -p /home/app/mongodb/rs3-master/logs
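The repetitive mkdir calls above can be scripted. A minimal, hypothetical sketch: it uses a local ./mongodb-demo directory so it runs anywhere, whereas the real hosts use /home/app/mongodb, and ROLES would be the role list of the host in question.

```shell
#!/bin/sh
# Hypothetical sketch of the mkdir sequence above. BASE is a local demo
# directory so the script runs anywhere; on the real hosts it would be
# /home/app/mongodb.
BASE="./mongodb-demo"
ROLES="rs1-master rs2-arbiter rs3-slaver"   # roles on 192.168.139.142

mkdir -p "$BASE/conf"
for role in $ROLES; do
    mkdir -p "$BASE/$role/data" "$BASE/$role/logs"
done
```

On 192.168.139.149 ROLES would be `rs1-slaver rs2-master rs3-arbiter`, and on 192.168.139.148 `rs1-arbiter rs2-slaver rs3-master`.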

Config servers (all three machines)

Starting with MongoDB 3.4, the config servers are required to run as a replica set themselves, otherwise the cluster cannot be built.
Start the config server:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod --fork --configsvr --replSet configReplSet --dbpath /home/app/mongodb/config/data --logpath /home/app/mongodb/config/logs/config.log --port 20000

After running the above on all 3 machines, execute the following on any one of them:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --port 20000   // enter the mongo shell
config = {
_id: "configReplSet",
configsvr: true,
version: 1,
members: [
{ _id : 0, host : "192.168.139.142:20000" },
{ _id : 1, host : "192.168.139.149:20000" },
{ _id : 2, host : "192.168.139.148:20000" }
]
}
rs.initiate(config);

mongos (all three machines)

Start mongos:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongos --configdb configReplSet/192.168.139.142:20000,192.168.139.149:20000,192.168.139.148:20000 --fork --logpath /home/app/mongodb/mongos/logs/mongos.log --port 30000

Configure the shard replica sets (on all three machines), as follows:

First replica set:

rs1-master(192.168.139.142)
vi /home/app/mongodb/conf/rs1-master.conf
dbpath=/home/app/mongodb/rs1-master/data
logpath=/home/app/mongodb/rs1-master/logs/rs1-master.log
pidfilepath=/home/app/mongodb/rs1-master/rs1-master.pid
directoryperdb=true
logappend=true
replSet=rs1
bind_ip=192.168.139.142
port=10001
oplogSize=10000
fork=true
noprealloc=true

Start rs1-master:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs1-master.conf
rs1-slaver(192.168.139.149)
vi /home/app/mongodb/conf/rs1-slaver.conf
dbpath=/home/app/mongodb/rs1-slaver/data
logpath=/home/app/mongodb/rs1-slaver/logs/rs1-slaver.log
pidfilepath=/home/app/mongodb/rs1-slaver/rs1-slaver.pid
directoryperdb=true
logappend=true
replSet=rs1
bind_ip=192.168.139.149
port=10001
oplogSize=10000
fork=true
noprealloc=true

Start rs1-slaver:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs1-slaver.conf
rs1-arbiter(192.168.139.148)
vi /home/app/mongodb/conf/rs1-arbiter.conf
dbpath=/home/app/mongodb/rs1-arbiter/data
logpath=/home/app/mongodb/rs1-arbiter/logs/rs1-arbiter.log
pidfilepath=/home/app/mongodb/rs1-arbiter/rs1-arbiter.pid
directoryperdb=true
logappend=true
replSet=rs1
bind_ip=192.168.139.148
port=10001
oplogSize=10000
fork=true
noprealloc=true

Start rs1-arbiter:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs1-arbiter.conf

After running the above on all 3 machines, execute the following on 192.168.139.142:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.142 --port 10001   // run on 192.168.139.142, i.e. the rs1-master machine
config = {
_id: "rs1",
members: [
{ _id : 0, host : "192.168.139.142:10001" },
{ _id : 1, host : "192.168.139.149:10001" },
{ _id : 2, host : "192.168.139.148:10001" , arbiterOnly: true }
]
}
rs.initiate(config);

Second replica set:

rs2-arbiter(192.168.139.142)
vi /home/app/mongodb/conf/rs2-arbiter.conf
dbpath=/home/app/mongodb/rs2-arbiter/data
logpath=/home/app/mongodb/rs2-arbiter/logs/rs2-arbiter.log
pidfilepath=/home/app/mongodb/rs2-arbiter/rs2-arbiter.pid
directoryperdb=true
logappend=true
replSet=rs2
bind_ip=192.168.139.142
port=10002
oplogSize=10000
fork=true
noprealloc=true

Start rs2-arbiter:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs2-arbiter.conf
rs2-master(192.168.139.149)
vi /home/app/mongodb/conf/rs2-master.conf
dbpath=/home/app/mongodb/rs2-master/data
logpath=/home/app/mongodb/rs2-master/logs/rs2-master.log
pidfilepath=/home/app/mongodb/rs2-master/rs2-master.pid
directoryperdb=true
logappend=true
replSet=rs2
bind_ip=192.168.139.149
port=10002
oplogSize=10000
fork=true
noprealloc=true

Start rs2-master:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs2-master.conf
rs2-slaver(192.168.139.148)
vi /home/app/mongodb/conf/rs2-slaver.conf
dbpath=/home/app/mongodb/rs2-slaver/data
logpath=/home/app/mongodb/rs2-slaver/logs/rs2-slaver.log
pidfilepath=/home/app/mongodb/rs2-slaver/rs2-slaver.pid
directoryperdb=true
logappend=true
replSet=rs2
bind_ip=192.168.139.148
port=10002
oplogSize=10000
fork=true
noprealloc=true

Start rs2-slaver:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs2-slaver.conf

After running the above on all 3 machines, execute the following on 192.168.139.149:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.149 --port 10002   // run on 192.168.139.149
config = {
_id: "rs2",
members: [
{ _id : 0, host : "192.168.139.142:10002" , arbiterOnly: true },
{ _id : 1, host : "192.168.139.149:10002" },
{ _id : 2, host : "192.168.139.148:10002" }
]
}
rs.initiate(config);

Third replica set:

rs3-slaver(192.168.139.142)
vi /home/app/mongodb/conf/rs3-slaver.conf
dbpath=/home/app/mongodb/rs3-slaver/data
logpath=/home/app/mongodb/rs3-slaver/logs/rs3-slaver.log
pidfilepath=/home/app/mongodb/rs3-slaver/rs3-slaver.pid
directoryperdb=true
logappend=true
replSet=rs3
bind_ip=192.168.139.142
port=10003
oplogSize=10000
fork=true
noprealloc=true

Start rs3-slaver:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs3-slaver.conf
rs3-arbiter(192.168.139.149)
vi /home/app/mongodb/conf/rs3-arbiter.conf
dbpath=/home/app/mongodb/rs3-arbiter/data
logpath=/home/app/mongodb/rs3-arbiter/logs/rs3-arbiter.log
pidfilepath=/home/app/mongodb/rs3-arbiter/rs3-arbiter.pid
directoryperdb=true
logappend=true
replSet=rs3
bind_ip=192.168.139.149
port=10003
oplogSize=10000
fork=true
noprealloc=true

Start rs3-arbiter:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs3-arbiter.conf
rs3-master(192.168.139.148)
vi /home/app/mongodb/conf/rs3-master.conf
dbpath=/home/app/mongodb/rs3-master/data
logpath=/home/app/mongodb/rs3-master/logs/rs3-master.log
pidfilepath=/home/app/mongodb/rs3-master/rs3-master.pid
directoryperdb=true
logappend=true
replSet=rs3
bind_ip=192.168.139.148
port=10003
oplogSize=10000
fork=true
noprealloc=true

Start rs3-master:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs3-master.conf

After running the above on all 3 machines, execute the following on 192.168.139.148:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.148 --port 10003   // run on 192.168.139.148
config = {
_id: "rs3",
members: [
{ _id : 0, host : "192.168.139.142:10003" },
{ _id : 1, host : "192.168.139.149:10003" , arbiterOnly: true },
{ _id : 2, host : "192.168.139.148:10003" }
]
}
rs.initiate(config);
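All nine mongod configuration files above follow one template, differing only in role name, replica set, bind address, and port. A hypothetical generator sketch; it writes to a local ./conf-demo directory so it can run anywhere, whereas on the real hosts OUT would be /home/app/mongodb/conf.

```shell
#!/bin/sh
# Hypothetical generator for the nine mongod .conf files shown above.
OUT="./conf-demo"
mkdir -p "$OUT"

write_conf() {  # args: role replset ip port
    role=$1; rset=$2; ip=$3; port=$4
    cat > "$OUT/$role.conf" <<EOF
dbpath=/home/app/mongodb/$role/data
logpath=/home/app/mongodb/$role/logs/$role.log
pidfilepath=/home/app/mongodb/$role/$role.pid
directoryperdb=true
logappend=true
replSet=$rset
bind_ip=$ip
port=$port
oplogSize=10000
fork=true
noprealloc=true
EOF
}

# Example: the three files for 192.168.139.142
write_conf rs1-master  rs1 192.168.139.142 10001
write_conf rs2-arbiter rs2 192.168.139.142 10002
write_conf rs3-slaver  rs3 192.168.139.142 10003
```

The other two hosts would call write_conf with their own role names, IPs, and the same ports.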

Enable sharding

So far we have set up the config servers, the mongos routers, and the shard servers, but an application connecting to a mongos router cannot use sharding yet; the sharding configuration still has to be applied for it to take effect.

Log in to any mongos:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.142 --port 30000

Link the routers to the shard replica sets:

sh.addShard("rs1/192.168.139.142:10001,192.168.139.149:10001,192.168.139.148:10001")
sh.addShard("rs2/192.168.139.142:10002,192.168.139.149:10002,192.168.139.148:10002")
sh.addShard("rs3/192.168.139.142:10003,192.168.139.149:10003,192.168.139.148:10003")

At this point the config servers, routers, shards, and replica sets are all wired together, but the goal is for inserted data to be sharded automatically. Stay connected to mongos and enable sharding for a specific database and collection.

Enable sharding for testdb:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.142 --port 30000
mongos> use admin;
switched to db admin
mongos> db.runCommand( { enablesharding :"testdb"});
{ "ok" : 1 }

Specify the collection to shard within the database and its shard key:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.142 --port 30000
mongos> use testdb;
switched to db testdb
mongos> sh.shardCollection("testdb.testcol", { "id" : 1 } )
{ "collectionsharded" : "testdb.testcol", "ok" : 1 }

The step above declares that the testcol collection in testdb is to be sharded, distributed across rs1, rs2 and rs3 by the id field.

Test the sharding configuration:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.142 --port 30000
use testdb;  // switch to testdb
for(i=1;i<=500051;i++) db.testcol.insert({id:i,name:"test",age:11})  // insert test data
db.testcol.stats();  // inspect the collection

The data is indeed split across the 3 shards, with per-shard counts of rs1 "count" : 220494, rs2 "count" : 279524, and rs3 "count" : 33. Sharding works, but the distribution is far from even, so the choice of shard key deserves real thought.
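One likely reason for the skew: with a ranged shard key, a monotonically increasing id sends every new insert to the chunk covering the highest range, so one shard takes the brunt until the balancer migrates chunks. MongoDB also supports hashed shard keys (e.g. sh.shardCollection("testdb.testcol", { "id" : "hashed" })), which spread consecutive values. A plain-shell illustration of the spreading idea, using cksum as a stand-in for MongoDB's internal hash function:

```shell
#!/bin/sh
# Illustration only (not MongoDB code): hashing a monotonically increasing
# key scatters consecutive values across buckets, which is the idea behind
# a hashed shard key. cksum (CRC32) stands in for the real hash.
c0=0; c1=0; c2=0
i=1
while [ "$i" -le 300 ]; do
    h=$(printf '%s' "$i" | cksum | cut -d' ' -f1)
    case $((h % 3)) in
        0) c0=$((c0+1));;
        1) c1=$((c1+1));;
        2) c2=$((c2+1));;
    esac
    i=$((i+1))
done
echo "bucket counts: $c0 $c1 $c2"
```

All three buckets receive a sizeable share, unlike the ranged case where consecutive ids pile into one chunk. Note that a hashed key gives up efficient range queries on id.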

Enable user authentication

Switch to the admin database and create a root user. The root role (available only in the admin database) can both grant privileges and perform arbitrary operations on collections.

1. Create an authentication user on mongos:

[root@zabbix ~]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.142 --port 30000
mongos> use admin
switched to db admin
mongos> db.createUser({"user":"mongosroot","pwd":"123456","roles":["root"]})
Successfully added user: { "user" : "mongosroot", "roles" : [ "root" ] }

Then switch to the testdb database and create a user with read-write privileges:

mongos> use testdb
switched to db testdb
mongos> db.createUser({"user":"mongosuser","pwd":"123456","roles":[{"db":"testdb","role":"dbOwner"}]})
Successfully added user: {
        "user" : "mongosuser",
        "roles" : [
                {
                        "db" : "testdb",
                        "role" : "dbOwner"
                }
        ]
}

2. Create authentication users on the config servers:

Note: the user created on mongos can already connect to the config servers, so the config servers do not strictly need their own users. If you still want separate users to tell them apart, the command format is as follows:

[root@zabbix ~]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.142 --port 20000 // connect to config1
> use admin
switched to db admin
> db.createUser({"user":"configroot1","pwd":"123456","roles":["root"]})
Successfully added user: { "user" : "configroot1", "roles" : [ "root" ] }
[root@zabbix ~]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.149 --port 20000 // connect to config2
> use admin
switched to db admin
> db.createUser({"user":"configroot2","pwd":"123456","roles":["root"]})
Successfully added user: { "user" : "configroot2", "roles" : [ "root" ] }
[root@zabbix ~]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.148 --port 20000 // connect to config3
> use admin
switched to db admin
> db.createUser({"user":"configroot3","pwd":"123456","roles":["root"]})
Successfully added user: { "user" : "configroot3", "roles" : [ "root" ] }

One thing to note in particular: for config server users, a user created on one machine can only be used to connect to that machine, not the other config servers. Unlike mongos, where one user suffices, you need to create a user for every config process. Here 192.168.139.142/192.168.139.149/192.168.139.148 correspond to config1/config2/config3, so I connected to each of config1/config2/config3 and created a user on each.

3. Create authentication users on rs1

[root@zabbix ~]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.142 --port 10001 // connect to the rs1 PRIMARY
rs1:PRIMARY> use admin
switched to db admin
rs1:PRIMARY> db.createUser({"user":"rs1root","pwd":"123456","roles":["root"]})
Successfully added user: { "user" : "rs1root", "roles" : [ "root" ] }

Then switch to the testdb database and create a user with read-write privileges:

rs1:PRIMARY> use testdb
switched to db testdb
rs1:PRIMARY> db.createUser({"user":"rs1user","pwd":"123456","roles":[{"db":"testdb","role":"dbOwner"}]})
Successfully added user: {
        "user" : "rs1user",
        "roles" : [
                {
                        "db" : "testdb",
                        "role" : "dbOwner"
                }
        ]
}

4. Create authentication users on rs2

[root@zabbix ~]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.149 --port 10002 // connect to the rs2 PRIMARY
rs2:PRIMARY> use admin
switched to db admin
rs2:PRIMARY> db.createUser({"user":"rs2root","pwd":"123456","roles":["root"]})
Successfully added user: { "user" : "rs2root", "roles" : [ "root" ] }

Then switch to the testdb database and create a user with read-write privileges:

rs2:PRIMARY> use testdb
switched to db testdb
rs2:PRIMARY> db.createUser({"user":"rs2user","pwd":"123456","roles":[{"db":"testdb","role":"dbOwner"}]})
Successfully added user: {
        "user" : "rs2user",
        "roles" : [
                {
                        "db" : "testdb",
                        "role" : "dbOwner"
                }
        ]
}

5. Create authentication users on rs3

[root@zabbix ~]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.148 --port 10003 // connect to the rs3 PRIMARY
rs3:PRIMARY> use admin
switched to db admin
rs3:PRIMARY> db.createUser({"user":"rs3root","pwd":"123456","roles":["root"]})
Successfully added user: { "user" : "rs3root", "roles" : [ "root" ] }

Then switch to the testdb database and create a user with read-write privileges:

rs3:PRIMARY> use testdb
switched to db testdb
rs3:PRIMARY> db.createUser({"user":"rs3user","pwd":"123456","roles":[{"db":"testdb","role":"dbOwner"}]})
Successfully added user: {
        "user" : "rs3user",
        "roles" : [
                {
                        "db" : "testdb",
                        "role" : "dbOwner"
                }
        ]
}

6. Exit the mongo shell and create a keyFile (any filename works), then set its permissions to 600; the 600 permissions are mandatory:

[root@zabbix ~]# cd /home/app/mongodb
[root@zabbix mongodb]# openssl rand -base64 753 > keyFile
[root@zabbix mongodb]# chmod 600 keyFile
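mongod refuses a keyfile that is readable by group or others, so it is worth verifying the permissions right after creating the file. A sketch using a keyFile in the current directory (the article's file lives at /home/app/mongodb/keyFile):

```shell
#!/bin/sh
# Create the keyfile and verify its permissions (GNU stat assumed).
openssl rand -base64 753 > keyFile
chmod 600 keyFile

perms=$(stat -c '%a' keyFile)
[ "$perms" = "600" ] && echo "keyFile permissions OK ($perms)"
```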

7. Copy the generated keyFile to the replica set members and config servers on the other machines:

Here the keyFile is kept under /home/app/mongodb on all of 192.168.139.142/192.168.139.149/192.168.139.148.

[root@zabbix mongodb]# scp -rp keyFile root@192.168.139.149:/home/app/mongodb
[root@zabbix mongodb]# scp -rp keyFile root@192.168.139.148:/home/app/mongodb

8. Modify the corresponding startup configurations, adding the following to each (enabling user authentication and keyFile verification):

a)192.168.139.142:

Change the config server startup command to the following, adding the --auth and --keyFile options:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod --fork --configsvr --replSet configReplSet --dbpath /home/app/mongodb/config/data --logpath /home/app/mongodb/config/logs/config.log --port 20000 --auth --keyFile /home/app/mongodb/keyFile

Change the mongos startup command to the following, adding the --keyFile option:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongos --configdb configReplSet/192.168.139.142:20000,192.168.139.149:20000,192.168.139.148:20000 --fork --logpath /home/app/mongodb/mongos/logs/mongos.log --port 30000 --keyFile /home/app/mongodb/keyFile

Add the following lines to rs1-master.conf, rs2-arbiter.conf and rs3-slaver.conf:

auth=true
keyFile=/home/app/mongodb/keyFile

b) 192.168.139.149:

Change the config server startup command to the following, adding the --auth and --keyFile options:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod --fork --configsvr --replSet configReplSet --dbpath /home/app/mongodb/config/data --logpath /home/app/mongodb/config/logs/config.log --port 20000 --auth --keyFile /home/app/mongodb/keyFile

Change the mongos startup command to the following, adding the --keyFile option:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongos --configdb configReplSet/192.168.139.142:20000,192.168.139.149:20000,192.168.139.148:20000 --fork --logpath /home/app/mongodb/mongos/logs/mongos.log --port 30000 --keyFile /home/app/mongodb/keyFile

Add the following lines to rs1-slaver.conf, rs2-master.conf and rs3-arbiter.conf:

auth=true
keyFile=/home/app/mongodb/keyFile

c) 192.168.139.148:

Change the config server startup command to the following, adding the --auth and --keyFile options:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod --fork --configsvr --replSet configReplSet --dbpath /home/app/mongodb/config/data --logpath /home/app/mongodb/config/logs/config.log --port 20000 --auth --keyFile /home/app/mongodb/keyFile

Change the mongos startup command to the following, adding the --keyFile option:

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongos --configdb configReplSet/192.168.139.142:20000,192.168.139.149:20000,192.168.139.148:20000 --fork --logpath /home/app/mongodb/mongos/logs/mongos.log --port 30000 --keyFile /home/app/mongodb/keyFile

Add the following lines to rs1-arbiter.conf, rs2-slaver.conf and rs3-master.conf:

auth=true
keyFile=/home/app/mongodb/keyFile
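The step-8 edits are the same two lines on every host, so they can be appended in a loop that is safe to re-run. A hypothetical sketch; it creates its own demo .conf file so it runs anywhere, whereas on the real hosts CONF_DIR would be /home/app/mongodb/conf with the nine files already in place:

```shell
#!/bin/sh
# Append the auth lines to every .conf, skipping files that already
# contain them (so reruns are harmless).
CONF_DIR="./auth-demo"
mkdir -p "$CONF_DIR"
printf 'replSet=rs1\nport=10001\n' > "$CONF_DIR/rs1-master.conf"   # demo input

for f in "$CONF_DIR"/*.conf; do
    grep -q '^auth=true$' "$f" ||
        printf 'auth=true\nkeyFile=/home/app/mongodb/keyFile\n' >> "$f"
done
```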

9. Restart config, mongos and rs1/rs2/rs3, in that order

a) Start the config servers (all three machines):

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod --fork --configsvr --replSet configReplSet --dbpath /home/app/mongodb/config/data --logpath /home/app/mongodb/config/logs/config.log --port 20000 --auth --keyFile /home/app/mongodb/keyFile

b) Start mongos (all three machines):

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongos --configdb configReplSet/192.168.139.142:20000,192.168.139.149:20000,192.168.139.148:20000 --fork --logpath /home/app/mongodb/mongos/logs/mongos.log --port 30000 --keyFile /home/app/mongodb/keyFile

c) Start the replica sets:

On 192.168.139.142, start rs1-master.conf, rs2-arbiter.conf and rs3-slaver.conf:

[root@zabbix conf]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs1-master.conf 
[root@zabbix conf]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs2-arbiter.conf 
[root@zabbix conf]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs3-slaver.conf 

On 192.168.139.149, start rs1-slaver.conf, rs2-master.conf and rs3-arbiter.conf:

[root@zabbix test01]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs1-slaver.conf 
[root@zabbix test01]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs2-master.conf 
[root@zabbix test01]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs3-arbiter.conf 

On 192.168.139.148, start rs1-arbiter.conf, rs2-slaver.conf and rs3-master.conf:

[root@zabbix test02]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs1-arbiter.conf 
[root@zabbix test02]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs2-slaver.conf 
[root@zabbix test02]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongod -f /home/app/mongodb/conf/rs3-master.conf 

Verification:

[root@zabbix conf]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.142 --port 30000
mongos> show databases;
2018-08-22T11:50:26.577+0800 E QUERY    [thread1] Error: listDatabases failed:{
        "ok" : 0,
        "errmsg" : "not authorized on admin to execute command { listDatabases: 1.0 }",
        "code" : 13
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:62:1
shellHelper.show@src/mongo/shell/utils.js:760:19
shellHelper@src/mongo/shell/utils.js:650:15
@(shellhelp2):1:1
mongos> use admin
switched to db admin
mongos> db.auth("mongosroot","123456") 
1
mongos> show databases;
admin   0.000GB
config  0.001GB
testdb  0.018GB
mongos> use testdb
switched to db testdb
mongos> for(i=600000;i<=1000000;i++) db.testcol.insert({id:i,name:"test",age:11})

Per-shard counts are now rs1 "count" : 482977 / rs2 "count" : 279525 / rs3 "count" : 137552, up from the earlier rs1 "count" : 220494 / rs2 "count" : 279524 / rs3 "count" : 33, so all three replica sets received new data.

Notes:
If you connect to mongos as mongosroot, you must connect to the admin database and authenticate there first; you can then switch to testdb without re-authenticating as mongosuser, but you cannot connect to testdb directly. The root user sees the admin/config/testdb databases. If you connect as mongosuser, you can only connect directly to testdb and authenticate there, not to admin, and mongosuser sees only the testdb database.

Creating a new collection with authentication enabled:

Only a user with the root role can do this now; anything else fails for lack of privileges.

/home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.142 --port 30000
mongos> use admin
switched to db admin
mongos> db.auth('mongosroot','123456')
1
mongos> use testdb
switched to db testdb
mongos> sh.shardCollection("testdb.testcol01", { "id" : 1 } )
{ "collectionsharded" : "testdb.testcol01", "ok" : 1 }
mongos> for(i=1;i<=500000;i++) db.testcol01.insert({id:i,name:"test",age:11}) 

The data in testcol01 is now distributed as rs1 "count" : 205634 / rs2 "count" : 294350 / rs3 "count" : 16.

Common commands:

1.mongostat

[root@test01 conf]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongostat  --host=192.168.139.142 --port=10001 --username rs1root  --password 123456 --authenticationDatabase admin               
insert query update delete getmore command % dirty % used flushes vsize    res qr|qw ar|aw netIn netOut conn set repl                      time
    *0    *0     *0     *0       0     5|0     0.0   32.2       1  1.2G 426.0M   0|0   0|0  610b    38k   33 rs1  PRI 2018-08-23T13:38:44+08:00
    *0    *0     *0     *0       0     5|0     0.0   32.2       0  1.2G 426.0M   0|0   0|0  610b    38k   33 rs1  PRI 2018-08-23T13:38:45+08:00
    *0    *0     *0     *0       1     2|0     0.0   32.2       0  1.2G 426.0M   0|0   0|0  730b    19k   33 rs1  PRI 2018-08-23T13:38:46+08:00
Parameter notes:
--host: the IP address and port; you can also give only the IP and pass the port via --port;
--username: the user name, if authentication is enabled;
--password: the password;
--authenticationDatabase: the database to authenticate against (the database where the above account was created), if authentication is enabled.

Field descriptions:
insert - inserts per second (a * marks replicated ops)
query - queries per second
update - updates per second
delete - deletes per second
getmore - getmore (cursor batch) calls per second; ten simple queries can be faster than one complex one, so the raw number matters less than knowing whether queries and inserts are happening at all
command - commands per second; a bulk insert counts as a single command, so the number alone says little; on a secondary it shows two values, local|replicated, and comparing them can reveal problems
flushes - fsync flushes to disk per second; usually 0 or 1, and the interval between the 1s shows roughly how often a flush happens; flushes are expensive, so frequent flushing is worth investigating
mapped - amount of data mapped into memory (total data size)
vsize - virtual memory size of the process
res - resident (physical) memory size of the process
faults - page faults per second (Linux only): data swapped out of physical memory to swap; sustained values over 100 suggest the machine has too little RAM and is swapping heavily, so add memory or scale out
locked - name of, and percent time spent in, the most locked database
idx miss - percent of btree page misses (sampled)
qr|qw - queue lengths of clients waiting to read|write
ar|aw - number of active readers|writers
netIn - network traffic received by the MongoDB instance, in bytes
netOut - network traffic sent by the MongoDB instance, in bytes
conn - total number of open connections
set - replica set name
repl - replication status: PRI = primary, SEC = secondary, REC = recovering, UNK = unknown, SLV = slave, RTR = mongos ("router")
time - current time

2.mongodump

[root@test01 conf]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongodump --port 10001 -h 192.168.139.142 --username rs1root --password 123456 -d testdb -c testcol01 -o /home/app/rs1_testcol01_`date -d "1 day ago" +%Y%m%d` --authenticationDatabase admin
2018-08-23T13:45:13.257+0800    writing testdb.testcol01 to 
2018-08-23T13:45:14.530+0800    done dumping testdb.testcol01 (205635 documents)

Or:

[root@test01 conf]#  /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongodump --port 10001 -h 192.168.139.142 --username rs1user --password 123456 -d testdb -c testcol01 -o /home/app/rs1_testcol01_`date -d "1 day ago" +%Y%m%d`
2018-08-23T13:47:37.435+0800    writing testdb.testcol01 to 
2018-08-23T13:47:38.698+0800    done dumping testdb.testcol01 (205635 documents)
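The backtick date expression in the mongodump commands above names each backup directory after yesterday's date. A small sketch of that naming plus a hypothetical 7-day retention step; DEST is a local directory here so the sketch runs anywhere:

```shell
#!/bin/sh
# Build the dated backup directory name used above (GNU date assumed).
DEST="./backup-demo"
stamp=$(date -d "1 day ago" +%Y%m%d)
dir="$DEST/rs1_testcol01_$stamp"
mkdir -p "$dir"
echo "mongodump output directory: $dir"

# Hypothetical retention: drop backup directories older than 7 days.
find "$DEST" -maxdepth 1 -type d -name 'rs1_testcol01_*' -mtime +7 -exec rm -rf {} +
```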

3.mongorestore

[root@zabbix ~]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongorestore --username  rs3root --password 123456  --host 192.168.139.142 --port 10003 --authenticationDatabase admin --db testdb  /home/app/rs1_testcol01_20180822/testdb/testcol01.bson --collection testcol
2018-08-23T14:17:00.817+0800    checking for collection data in /home/app/rs1_testcol01_20180822/testdb/testcol01.bson
2018-08-23T14:17:00.818+0800    reading metadata for testdb.testcol from /home/app/rs1_testcol01_20180822/testdb/testcol01.metadata.json
2018-08-23T14:17:00.818+0800    restoring testdb.testcol from /home/app/rs1_testcol01_20180822/testdb/testcol01.bson
2018-08-23T14:17:03.821+0800    [###################.....]  testdb.testcol  10.1 MB/12.2 MB  (82.7%)
2018-08-23T14:17:05.007+0800    [########################]  testdb.testcol  12.2 MB/12.2 MB  (100.0%)
2018-08-23T14:17:05.007+0800    restoring indexes for collection testdb.testcol from metadata
2018-08-23T14:17:05.007+0800    finished restoring testdb.testcol (205635 documents)
2018-08-23T14:17:05.007+0800    done

Or:

[root@zabbix ~]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongorestore --username  rs3user --password 123456  --host 192.168.139.142 --port 10003  --db testdb  /home/app/rs1_testcol01_20180822/testdb/testcol01.bson --collection testcol
2018-08-23T14:18:52.265+0800    checking for collection data in /home/app/rs1_testcol01_20180822/testdb/testcol01.bson
2018-08-23T14:18:52.266+0800    reading metadata for testdb.testcol from /home/app/rs1_testcol01_20180822/testdb/testcol01.metadata.json
2018-08-23T14:18:52.267+0800    restoring testdb.testcol from /home/app/rs1_testcol01_20180822/testdb/testcol01.bson
2018-08-23T14:18:55.271+0800    [##################......]  testdb.testcol  9.5 MB/12.2 MB  (77.8%)
2018-08-23T14:18:56.852+0800    [########################]  testdb.testcol  12.2 MB/12.2 MB  (100.0%)
2018-08-23T14:18:56.852+0800    restoring indexes for collection testdb.testcol from metadata
2018-08-23T14:18:56.852+0800    finished restoring testdb.testcol (205635 documents)
2018-08-23T14:18:56.852+0800    done

Rollback plan:

Comment out the following two lines in the replica set config files:
auth=true
keyFile=/home/app/mongodb/keyFile

Then restart the replica sets; remove the --auth --keyFile /home/app/mongodb/keyFile options from the config server startup command and restart the config servers; and remove the --keyFile /home/app/mongodb/keyFile option from mongos and restart it. Verification:

[root@zabbix conf]# /home/app/mongodb-linux-x86_64-rhel62-3.2.6/bin/mongo --host 192.168.139.148 --port 30000
mongos> show databases;
admin   0.000GB
config  0.001GB
testdb  0.057GB
mongos> use testdb;
switched to db testdb
mongos> db.testcol.count()
900054
mongos> use admin
switched to db admin
mongos> show collections
system.users
system.version
