Building Oracle 10g RAC on Solaris 10 (x86): Configuring the System Environment (2)
System environment:
Operating system: Solaris 10 (x86-64)
Cluster: Oracle CRS 10.2.0.1.0
Oracle: Oracle 10.2.0.1.0
RAC system architecture (as shown in the figure)
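Name resolution for the public, VIP, and private addresses must be consistent on both nodes before the trust setup and cluvfy steps below. A minimal sketch of the /etc/hosts entries this walkthrough assumes, reconstructed from the addresses that appear later in the cluvfy and ifconfig output (the file itself is not shown in the original):

# /etc/hosts (same on node1 and node2) -- addresses reconstructed from the output below
192.168.8.11   node1          # public
192.168.8.12   node2          # public
192.168.8.13   node1-vip      # virtual IP
192.168.8.14   node2-vip      # virtual IP
10.10.10.11    node1-priv     # private interconnect
10.10.10.12    node2-priv     # private interconnect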
I. Establish trust between the hosts (on all nodes)
1. Configure the /etc/hosts.equiv file
[root@node1:/]# cat /etc/hosts.equiv
node1 root
node1 oracle
node1-vip root
node1-vip oracle
node1-priv root
node1-priv oracle
node2 root
node2 oracle
node2-vip root
node2-vip oracle
node2-priv root
node2-priv oracle
2. Configure the Oracle user's .rhosts file
[oracle@node1:/export/home/oracle]$ cat .rhosts
node1 root
node1 oracle
node1-vip root
node1-vip oracle
node1-priv root
node1-priv oracle
node2 root
node2 oracle
node2-vip root
node2-vip oracle
node2-priv root
node2-priv oracle
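The r-services refuse trust if these files look unsafe. As a commonly recommended precaution (not part of the original steps, so treat it as an assumption about your environment), keep /etc/hosts.equiv owned by root and .rhosts owned by oracle, neither writable by group or others, on every node:

[root@node1:/]# chmod 644 /etc/hosts.equiv
[oracle@node1:/export/home/oracle]$ chmod 600 /export/home/oracle/.rhosts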
3. Start the related services and verify
[root@node1:/]# svcs -a | grep rlogin
disabled       10:05:17 svc:/network/login:rlogin
[root@node1:/]# svcadm enable svc:/network/login:rlogin
[root@node1:/]# svcadm enable svc:/network/rexec:default
[root@node1:/]# svcadm enable svc:/network/shell:default
[root@node1:/]# svcs -a | grep rlogin
online         11:37:34 svc:/network/login:rlogin
[root@node1:/]# su - oracle
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
[oracle@node1:/export/home/oracle]$ rlogin node1
Last login: Wed Jan 21 11:29:36 from node2-priv
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
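cluvfy (used in the next section) checks user equivalence by running remote commands rather than an interactive rlogin, so it is worth confirming that oracle can execute a command on every node name without a password prompt. A minimal check, assuming the same services have also been enabled on node2:

[oracle@node1:/export/home/oracle]$ rsh node1 date
[oracle@node1:/export/home/oracle]$ rsh node2 date
[oracle@node1:/export/home/oracle]$ rsh node1-priv date
[oracle@node1:/export/home/oracle]$ rsh node2-priv date

Each command should print the remote date immediately; a password prompt or a "permission denied" message means hosts.equiv/.rhosts or svc:/network/shell:default needs another look.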
II. Pre-installation checks for CRS (on node1)
[oracle@node1:/export/home/oracle]$ unzip 10201_clusterware_solx86_64.zip
[oracle@node1:/export/home/oracle/clusterware/cluvfy]$ ./runcluvfy.sh

USAGE:
cluvfy [ -help ]
cluvfy stage { -list | -help }
cluvfy stage {-pre|-post}  [-verbose]
cluvfy comp  { -list | -help }
cluvfy comp  [-verbose]

[oracle@node1:/export/home/oracle/clusterware/cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "node1"
  Destination Node    Reachable?
  ------------------  ------------
  node1               yes
  node2               yes
Result: Node reachability check passed from node "node1".

Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name           Comment
  ------------------  ------------
  node2               passed
  node1               passed
Result: User equivalence check passed for user "oracle".

Checking administrative privileges...

Check: Existence of user "oracle"
  Node Name    User Exists    Comment
  -----------  -------------  ------------
  node2        yes            passed
  node1        yes            passed
Result: User existence check passed for "oracle".

Check: Existence of group "oinstall"
  Node Name    Status         Group ID
  -----------  -------------  ------------
  node2        exists         200
  node1        exists         200
Result: Group existence check passed for "oinstall".

Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name    User Exists   Group Exists   User in Group   Primary   Comment
  -----------  ------------  -------------  --------------  --------  --------
  node2        yes           yes            yes             yes       passed
  node1        yes           yes            yes             yes       passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.

Administrative privileges check passed.

Checking node connectivity...

Interface information for node "node2"
  Interface Name    IP Address       Subnet
  ----------------  ---------------  ---------------
  e1000g0           192.168.8.12     192.168.8.0
  e1000g1           10.10.10.12      10.10.10.0

Interface information for node "node1"
  Interface Name    IP Address       Subnet
  ----------------  ---------------  ---------------
  e1000g0           192.168.8.11     192.168.8.0
  e1000g1           10.10.10.11      10.10.10.0

Check: Node connectivity of subnet "192.168.8.0"
  Source            Destination       Connected?
  ----------------  ----------------  ------------
  node2:e1000g0     node1:e1000g0     yes
Result: Node connectivity check passed for subnet "192.168.8.0" with node(s) node2,node1.

Check: Node connectivity of subnet "10.10.10.0"
  Source            Destination       Connected?
  ----------------  ----------------  ------------
  node2:e1000g1     node1:e1000g1     yes
Result: Node connectivity check passed for subnet "10.10.10.0" with node(s) node2,node1.

Suitable interfaces for the private interconnect on subnet "192.168.8.0":
  node2 e1000g0:192.168.8.12
  node1 e1000g0:192.168.8.11

Suitable interfaces for the private interconnect on subnet "10.10.10.0":
  node2 e1000g1:10.10.10.12
  node1 e1000g1:10.10.10.11

ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.          ---- VIP network check failed

Checking system requirements for 'crs'...

Check: Total memory
  Node Name    Available            Required            Comment
  -----------  -------------------  ------------------  --------
  node2        1.76GB (1843200KB)   512MB (524288KB)    passed
  node1        1.76GB (1843200KB)   512MB (524288KB)    passed
Result: Total memory check passed.
Check: Free disk space in "/tmp" dir
  Node Name    Available            Required            Comment
  -----------  -------------------  ------------------  --------
  node2        3GB (3150148KB)      400MB (409600KB)    passed
  node1        2.74GB (2875128KB)   400MB (409600KB)    passed
Result: Free disk space check passed.

Check: Swap space
  Node Name    Available            Required            Comment
  -----------  -------------------  ------------------  --------
  node2        2GB (2096476KB)      512MB (524288KB)    passed
  node1        2GB (2096476KB)      512MB (524288KB)    passed
Result: Swap space check passed.

Check: System architecture
  Node Name    Available            Required            Comment
  -----------  -------------------  ------------------  --------
  node2        64-bit               64-bit              passed
  node1        64-bit               64-bit              passed
Result: System architecture check passed.

Check: Operating system version
  Node Name    Available            Required            Comment
  -----------  -------------------  ------------------  --------
  node2        SunOS 5.10           SunOS 5.10          passed
  node1        SunOS 5.10           SunOS 5.10          passed
Result: Operating system version check passed.

Check: Operating system patch for "118345-03"
  Node Name    Applied              Required            Comment
  -----------  -------------------  ------------------  --------
  node2        unknown              118345-03           failed
  node1        unknown              118345-03           failed
Result: Operating system patch check failed for "118345-03".

Check: Operating system patch for "119961-01"
  Node Name    Applied              Required            Comment
  -----------  -------------------  ------------------  --------
  node2        119961-06            119961-01           passed
  node1        119961-06            119961-01           passed
Result: Operating system patch check passed for "119961-01".

Check: Operating system patch for "117837-05"
  Node Name    Applied              Required            Comment
  -----------  -------------------  ------------------  --------
  node2        unknown              117837-05           failed
  node1        unknown              117837-05           failed
Result: Operating system patch check failed for "117837-05".

Check: Operating system patch for "117846-08"
  Node Name    Applied              Required            Comment
  -----------  -------------------  ------------------  --------
  node2        unknown              117846-08           failed
  node1        unknown              117846-08           failed
Result: Operating system patch check failed for "117846-08".

Check: Operating system patch for "118682-01"
  Node Name    Applied              Required            Comment
  -----------  -------------------  ------------------  --------
  node2        unknown              118682-01           failed
  node1        unknown              118682-01           failed
Result: Operating system patch check failed for "118682-01".          ---- OS patch check failed

Check: Group existence for "dba"
  Node Name    Status        Comment
  -----------  ------------  ------------
  node2        exists        passed
  node1        exists        passed
Result: Group existence check passed for "dba".

Check: Group existence for "oinstall"
  Node Name    Status        Comment
  -----------  ------------  ------------
  node2        exists        passed
  node1        exists        passed
Result: Group existence check passed for "oinstall".

Check: User existence for "oracle"
  Node Name    Status        Comment
  -----------  ------------  ------------
  node2        exists        passed
  node1        exists        passed
Result: User existence check passed for "oracle".

Check: User existence for "nobody"
  Node Name    Status        Comment
  -----------  ------------  ------------
  node2        exists        passed
  node1        exists        passed
Result: User existence check passed for "nobody".

System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes.
---- In the environment check above, the VIP network check failed.
If the VIP network was not configured before running the check, it can be set up as shown below; if it has already been configured, this check will not fail.
Configure the VIP network (node1):
[root@node1:/]# ifconfig e1000g0:1 plumb up
[root@node1:/]# ifconfig e1000g0:1 192.168.8.13 netmask 255.255.255.0
[root@node1:/]# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
        inet 192.168.8.11 netmask ffffff00 broadcast 192.168.8.255
        ether 8:0:27:28:b1:8c
e1000g0:1: flags=4001000842 mtu 1500 index 2
        inet 192.168.8.13 netmask ffffff00 broadcast 192.168.8.255
e1000g1: flags=1000843 mtu 1500 index 3
        inet 10.10.10.11 netmask ffffff00 broadcast 10.10.10.255
        ether 8:0:27:6e:16:1
Configure the VIP network (node2):
[root@node2:/]# ifconfig e1000g0:1 plumb up
[root@node2:/]# ifconfig e1000g0:1 192.168.8.14 netmask 255.255.255.0
[root@node2:/]# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
        inet 192.168.8.12 netmask ffffff00 broadcast 192.168.8.255
        ether 8:0:27:1f:bf:4c
e1000g0:1: flags=1000843 mtu 1500 index 2
        inet 192.168.8.14 netmask ffffff00 broadcast 192.168.8.255
e1000g1: flags=1000843 mtu 1500 index 3
        inet 10.10.10.12 netmask ffffff00 broadcast 10.10.10.255
        ether 8:0:27:a5:2c:db
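With the VIP addresses plumbed on both nodes, the failed connectivity check can be re-verified without repeating the whole stage. One way to do this (a sketch, assuming the clusterware software is still unpacked in the same location as above) is to run only the node connectivity component of cluvfy:

[oracle@node1:/export/home/oracle/clusterware/cluvfy]$ ./runcluvfy.sh comp nodecon -n node1,node2 -verbose

Note that the VIP addresses are ultimately brought online and managed by CRS (via VIPCA) after installation; the logical interfaces plumbed here only exist so that the pre-check can find a usable subnet for the VIPs.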
---- In the environment check above, several required OS patches were not installed (they can be downloaded from Oracle's official website; since this machine is a test environment, they are left uninstalled for now).
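Before deciding whether to download them, the status of the patches flagged as "unknown" can be double-checked on each node with showrev; for example (patch numbers taken from the cluvfy output above):

[root@node1:/]# showrev -p | egrep "118345|117837|117846|118682"

No output means the patch really is not applied on that node.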