A business requirement happened to call for this, so here is a configuration document.
Server environment:
192.168.105.41
192.168.101.115
192.168.101.116
Due to resource constraints, three servers are used to build the environment here; with more capacity, each port could be given its own server.
Architecture:
MongoDB Replica set + sharding
192.168.105.41
shard1_1:5281
shard2_1:5282
config1:5888
mongos1:6000
192.168.101.115
shard1_2:5281
shard2_2:5282
config2:5888
mongos2:6000
192.168.101.116
shard1_3:5281
shard2_3:5282
config3:5888
mongos3:6000
Start the shard1 nodes:
Run the following commands on all three servers.
#chown -R mongo:mongo /data01/mongo_shard1.conf
#mkdir -p /data01/shard/shard1
#chown -R mongo:mongo /data01/shard/shard1
#cat /data01/mongo_shard1.conf
dbpath = /data01/shard/shard1
port = 5281
logpath = /data01/shard/shard1/mongodb_5281.log
logappend = true
fork = true
noauth = true
nounixsocket = true
directoryperdb = true
journal = true
journalCommitInterval = 40
profile = 0
nohttpinterface = true
nssize = 1000
oplogSize = 1024
storageEngine = wiredTiger
wiredTigerJournalCompressor = zlib
Switch to the mongo user and run:
#su - mongo
$numactl --interleave=all mongod --replSet shard1 -f /data01/mongo_shard1.conf
Once all three are up, connect to any shard1 node and initialize the replica set:
mongo 127.0.0.1:5281/admin
config = {_id: 'shard1', members: [
{_id: 0, host: '192.168.101.115:5281'},
{_id: 1, host: '192.168.101.116:5281'},
{_id: 2, host: '192.168.105.41:5281'}]
}
rs.initiate(config);
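After rs.initiate() returns, the election takes a few seconds; the member states can then be verified from the same shell session. A minimal check against the shard1 set built above (requires the live cluster):

```javascript
// In the mongo shell connected to any shard1 member (e.g. mongo 127.0.0.1:5281/admin):
// print each member's host and replication state; expect one PRIMARY and two SECONDARY.
rs.status().members.forEach(function (m) {
    print(m.name + " : " + m.stateStr);
});
```

The same check applies to the shard2 replica set after it is initiated.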
Start the shard2 nodes:
Run the following commands on all three servers.
#chown -R mongo:mongo /data01/mongo_shard2.conf
#mkdir -p /data01/shard/shard2
#chown -R mongo:mongo /data01/shard/shard2
#cat /data01/mongo_shard2.conf
dbpath = /data01/shard/shard2
port = 5282
logpath = /data01/shard/shard2/mongodb_5282.log
logappend = true
fork = true
noauth = true
nounixsocket = true
directoryperdb = true
journal = true
journalCommitInterval = 40
profile = 0
nohttpinterface = true
nssize = 1000
oplogSize = 1024
storageEngine = wiredTiger
wiredTigerJournalCompressor = zlib
Switch to the mongo user and run:
#su - mongo
$numactl --interleave=all mongod --replSet shard2 -f /data01/mongo_shard2.conf
Once all three are up, connect to any shard2 node and initialize the replica set:
mongo 127.0.0.1:5282/admin
config = {_id: 'shard2', members: [
{_id: 0, host: '192.168.101.115:5282'},
{_id: 1, host: '192.168.101.116:5282'},
{_id: 2, host: '192.168.105.41:5282'}]
}
rs.initiate(config);
Start the config servers:
On all three servers, set up the config layer.
#echo never > /sys/kernel/mm/transparent_hugepage/enabled
#echo never > /sys/kernel/mm/transparent_hugepage/defrag
#chown -R mongo:mongo /data01/mongo.conf
#mkdir -p /data01/shard/config
#chown -R mongo:mongo /data01/shard/config
#cat /data01/mongo.conf
dbpath = /data01/shard/config
port = 5888
logpath = /data01/shard/config/mongodb_5888.log
logappend = true
fork = true
noauth = true
nounixsocket = true
directoryperdb = true
journal = true
journalCommitInterval = 40
profile = 0
nohttpinterface = true
nssize = 1000
oplogSize = 1024
storageEngine = wiredTiger
wiredTigerJournalCompressor = zlib
Switch to the mongo user and start:
#su - mongo
$numactl --interleave=all mongod --configsvr -f /data01/mongo.conf
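Before starting mongos, it is worth confirming that each config server answers on its port. A quick check (host and port taken from this setup):

```javascript
// In the mongo shell connected to a config server (mongo 127.0.0.1:5888/admin):
// serverStatus().ok is 1 when the process is healthy.
print(db.serverStatus().ok);
```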
Start the route (mongos) service:
Start mongos on all three servers.
#chown -R mongo:mongo /data01/mongos.conf
#mkdir -p /data01/shard/mongos
#chown -R mongo:mongo /data01/shard/mongos
#vi /data01/mongos.conf
port = 6000
logpath = /data01/shard/mongos/mongodb_6000.log
logappend = true
fork = true
noauth = true
nounixsocket = true
nohttpinterface = true
Note: dbpath and other storage-related parameters are not needed here, because mongos is only a routing entry point and does not require many configuration parameters; including them will cause startup errors.
Switch to the mongo user and start:
#su - mongo
$numactl --interleave=all mongos --configdb 192.168.101.115:5888,192.168.101.116:5888,192.168.105.41:5888 -f /data01/mongos.conf
Connect to any mongos:
mongo 127.0.0.1:6000/admin
Add the shards.
If the node being added is a standalone rather than a replica set:
db.runCommand({addShard: "hostname:port"})
sh.addShard("hostname:port");
If it is a replica set, as in our deployment, there are two methods; in the first, the trailing name and maxsize fields are optional and may be omitted.
mongos> db.runCommand( { addshard : "shard1/192.168.101.115:5281,192.168.101.116:5281,192.168.105.41:5281",name:"s1",maxsize:20480} );
mongos> sh.addShard("shard2/192.168.101.115:5282,192.168.101.116:5282,192.168.105.41:5282");
{ "shardAdded" : "shard2", "ok" : 1 }
Notes:
name: specifies a name for each shard; if omitted, the system assigns one automatically.
maxSize: the maximum disk space each shard may use, in megabytes.
List the shards: the two shards added above are visible; the first, s1, is the custom name, and the second, shard2, was generated automatically by the system.
mongos> db.runCommand( { listshards : 1 } );
{
"shards" : [
{
"_id" : "s1",
"host" : "shard1/192.168.101.115:5281,192.168.101.116:5281,192.168.105.41:5281"
},
{
"_id" : "shard2",
"host" : "shard2/192.168.101.115:5282,192.168.101.116:5282,192.168.105.41:5282"
}
],
"ok" : 1
}
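Beyond listshards, sh.status() gives a fuller picture, including the balancer state and per-database sharding status. A brief check from the same mongos session:

```javascript
// In the mongo shell connected to any mongos (mongo 127.0.0.1:6000/admin):
sh.status();                   // full sharding overview: shards, balancer, databases
print(sh.getBalancerState());  // true when the balancer is enabled
```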
Upload a file:
dd if=/dev/zero of=test.txt bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 8.03571 s, 127 MB/s
du -sh test.txt
977M test.txt
mongo@db-192-168-101-115-> mongofiles -h 127.0.0.1 --port 6000 -d testdb put test.txt
2016-07-25T17:35:49.321+0800 connected to: 127.0.0.1:6000
added file: test.txt
Enable sharding:
db.runCommand({enableSharding:"testdb"});
db.runCommand({"shardcollection" : "testdb.fs.files", "key" : {"_id" : 1}});
db.runCommand({"shardcollection" : "testdb.fs.chunks", "key" : {"_id" : 1}});
db.fs.chunks.stats();
{
"sharded" : true,
"capped" : false,
"ns" : "testdb.fs.chunks",
"count" : 3923,
"size" : 1024243303,
"storageSize" : 64442368,
"totalIndexSize" : 225280,
"indexSizes" : {
"_id_" : 114688,
"files_id_1_n_1" : 110592
},
"avgObjSize" : 261086.74560285496,
"nindexes" : 2,
"nchunks" : 41,
"shards" : {
"s1" : {
"ns" : "testdb.fs.chunks",
"count" : 1931,
"size" : 503968759,
"avgObjSize" : 260988,
"storageSize" : 31707136,
"capped" : false, ...
"shard2" : {
"ns" : "testdb.fs.chunks",
"count" : 1992,
"size" : 520274544,
"avgObjSize" : 261182,
"storageSize" : 32735232,
"capped" : false, ...
}
Only the relevant portion of the output is shown here. The data has clearly been distributed across both shards: one with "size" : 503968759 and the other with "size" : 520274544.
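For a per-shard summary without reading the full stats() output, getShardDistribution() prints each shard's data size, document count, and percentage. A quick check against the collection above, from any mongos:

```javascript
// In the mongo shell connected to any mongos (mongo 127.0.0.1:6000/admin):
var testdb = db.getSiblingDB("testdb");
testdb.fs.chunks.getShardDistribution();  // per-shard size, doc count, and percentages
```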