Hadoop can't run over IPv6... whether a later version will support it, I have no idea XD
Newer Ubuntu releases turn IPv6 on by default, so remember to disable it first.
[Disable IPv6 on Ubuntu]
1. sudo vim /etc/sysctl.conf
2. Add these lines:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
3. Apply the settings
sudo sysctl -p
4. Verify (the output should be empty)
ip a | grep inet6
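As an extra sanity check (assuming the three sysctl keys above), you can read the values back directly:
# each value should print 1 after sysctl -p
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
cat /proc/sys/net/ipv6/conf/default/disable_ipv6
cat /proc/sys/net/ipv6/conf/lo/disable_ipv6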
// For convenience I install all the Hadoop-related packages under /etc/hadoop, with the HDFS files under HADOOP_HOME/tmp
// Also for convenience, change the owner of that path to the hadoop user first; being prompted for sudo all the time is a pain
// Do the single-node install and test first
[Hadoop]
1. Download/SFTP tar.gz to /home/hadoop
2. Copy and unzip
cd /etc
sudo mkdir hadoop
sudo chown -R hadoop hadoop
cd hadoop
cp /home/hadoop/hadoop-0.20.2.tar.gz .
tar -zxv -f hadoop-0.20.2.tar.gz
3. Configure hadoop-env.sh
cd hadoop-0.20.2
vim conf/hadoop-env.sh
# Sun JDK path
export JAVA_HOME=/etc/jdk/jdk1.6.0_32/
Test the hadoop command
bin/hadoop
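Running bin/hadoop with no arguments prints the usage; to double-check which release you unpacked, you can also ask for the version:
# prints the Hadoop release, e.g. 0.20.2
bin/hadoop version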
4. Configure core-site.xml
vim conf/core-site.xml (the properties go inside the <configuration> element)
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/etc/hadoop/hadoop-0.20.2/tmp</value>
</property>
mkdir tmp
5. Configure hdfs-site.xml
vim conf/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
6. Configure mapred-site.xml
vim conf/mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
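Each fragment in steps 4-6 goes inside the <configuration> element of its file. As a sketch (reusing the step 4 values), the complete core-site.xml could be written in one go with a heredoc:
# overwrite conf/core-site.xml with the two properties from step 4
cat > conf/core-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/etc/hadoop/hadoop-0.20.2/tmp</value>
  </property>
</configuration>
EOF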
7. Passwordless ssh to localhost (no key prompt)
#test
ssh localhost
#set
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
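If ssh localhost still asks for a password after this, the usual culprit is file permissions, since sshd refuses keys in a world-readable setup. A quick fix to try:
# sshd ignores authorized_keys unless permissions are strict
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh localhost   # should now log in without prompting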
8. Download the master's ssh key and keep it stored
9. Run Single Node
bin/hadoop namenode -format
bin/start-all.sh
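Before checking the web pages, you can confirm the daemons actually started; with the Sun JDK's bin on the PATH, jps lists the Java processes (names as in a single-node 0.20 setup):
# expect NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker
jps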
10. Check status // it takes a while before a live node or job shows up on the web pages (especially with multiple nodes)... go do something else and come back later
http://{hadoop ip}:50070/ (the DataNode count should equal the number of slaves)
http://{hadoop ip}:50030/
[Hadoop Exception Handling]
// when a datanode has problems, rm-ing the tmp dir and re-formatting usually gets it through
NOTE: if Incompatible namespaceIDs
rm -R tmp
mkdir tmp
bin/hadoop namenode -format
// if the status pages won't open, check whether the expected ports are listening
NOTE: if connections fail
Check the hosts and IPs in the xml files
netstat -l (on the master you can see the ports bound to vm1)
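To narrow that down, you can filter for the ports configured above (a sketch, assuming the 9000/9001/50070/50030 values from the xml files):
# only the Hadoop listeners; -n skips DNS lookups, -t limits to TCP
netstat -lnt | grep -E '9000|9001|50070|50030'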
// HBase is installed only for sqoop to reference; it makes no difference if it's left unconfigured
[HBase]
0. download version: hbase-0.92.1.tar.gz
FTP to host
1. cd /etc/hadoop
cp /home/hadoop/hbase-0.92.1.tar.gz .
tar -zxv -f hbase-0.92.1.tar.gz
[Sqoop]
0. download version: sqoop-1.4.1-incubating__hadoop-0.20.tar.gz
FTP to the host, then rename it to the shorter sqoop-1.4.1.tar.gz
1. cd /etc/hadoop
cp /home/hadoop/sqoop-1.4.1.tar.gz .
tar -zxv -f sqoop-1.4.1.tar.gz
#cp -R sqoop-1.4.1-incubating__hadoop-0.20/. hadoop-0.20.2/sqoop-1.4.1/
2. Set the paths
vim bin/configure-sqoop
# add at the top
# note: hadoop must use the Sun Java JDK!!
HADOOP_HOME=/etc/hadoop/hadoop-0.20.2/
HBASE_HOME=/etc/hadoop/hbase-0.92.1/
JAVA_HOME=/etc/jdk/jdk1.6.0_32/
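To confirm those three paths resolve, Sqoop can report its own version (a sketch, assuming the tarball unpacked to its default directory name):
cd /etc/hadoop/sqoop-1.4.1-incubating__hadoop-0.20
# prints the Sqoop version if HADOOP_HOME and JAVA_HOME are picked up
bin/sqoop version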
// Once the single-node install passes its tests, you can use the current VM as a template, and then just clone it N times
// You could also copy over your own Hadoop code at this point, but, well =_=" it's bound to change, so you'll end up deploying to every machine again later anyway
// For ssh without a key, it's enough for the master to sync with the others; simply put, the master just needs to be able to ssh in without a password to issue commands
// As I recall, for masters and slaves: the master edits both files, and the slaves only need the masters file changed. But for uniformity, making them all identical doesn't hurt either 一"一a
[Hadoop Multi PC]
0. Save a base VM image
{for each VM}
1. Create a new VM, deploy the base VM image, and start it
2. Sync the clock
sudo ntpdate time.stdtime.gov.tw
3. Change the machine's hostname
sudo vim /etc/hostname
sudo vim /etc/hosts
sudo reboot
#Remember the VM IP
4. Add every node's IP/hostname to hosts
sudo vim /etc/hosts
(Note: remove the 127.0.1.1 line; map the master's hostname to its external IP)
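A sketch of what /etc/hosts could end up looking like on each node (the master's IP is taken from the status URLs below; the slave IPs are made up for illustration):
127.0.0.1       localhost
192.168.22.111  h0
192.168.22.112  h1
192.168.22.113  h2
192.168.22.114  h3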
5. Passwordless ssh to every node (ssh localhost and ssh h1~hN) <-- do this only on the master
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys (needed for ssh localhost)
ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@h2 (@h1~@hN)
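From the master you can loop over the nodes to verify that none of them prompts (assuming hostnames h1~h3 as in the slaves file below):
# each line should print the remote hostname with no password prompt
for h in h1 h2 h3; do ssh hadoop@$h hostname; done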
6. Set the conf again (only the master needs to set masters and slaves)
cd /etc/hadoop/hadoop-0.20.2
vim conf/masters (enter the master's hostname)
h0
vim conf/slaves (add all hosts, including the master itself)
h1
h2
h3
vim conf/core-site.xml (update the hostname)
<value>hdfs://h0:9000</value>
vim conf/hdfs-site.xml (update the replication count)
<value>2</value>
vim conf/mapred-site.xml (update the hostname)
<value>h0:9001</value>
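Since every node needs the same three xml files, one way to push them from the master (a sketch, assuming the identical install path on every node, which the cloned VM image gives you):
# copy the updated configs to each slave
for h in h1 h2 h3; do
  scp conf/core-site.xml conf/hdfs-site.xml conf/mapred-site.xml hadoop@$h:/etc/hadoop/hadoop-0.20.2/conf/
done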
7. Start Hadoop on Master
bin/stop-all.sh
bin/hadoop namenode -format
bin/start-all.sh
8. Check status
http://192.168.22.111:50070/ (the DataNode count should equal the number of slaves)
http://192.168.22.111:50030/
9. #test1
//bin/hadoop jar hadoop-examples-1.0.0.jar pi 10 5000000
bin/hadoop jar hadoop-0.20.2-examples.jar pi 10 500
#test2
bin/hadoop dfs -ls output
(if output exists)
bin/hadoop fs -rmr output
bin/hadoop dfs -put ./wordcount input
bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output
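Once wordcount finishes, the counts land in HDFS under output; to eyeball them (the part file name can vary by release, so list first):
bin/hadoop dfs -ls output
# print the word counts; adjust the part file name to whatever -ls shows
bin/hadoop dfs -cat output/part-r-00000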
Finally, testing! The bundled example jobs are enough for this.
When multiple nodes are working, you can watch them on the 50030 page, which also shows job progress.
For logs, make good use of logs/ on the 50070 page; it's much easier to read and navigate than the black-and-white terminal (the file names are long and all look alike =_=")
By the way, I originally installed Hadoop 1.0, but it had some network problem I never figured out; at the time I couldn't tell whether it was a bug in the new release (1.0 had just come out, I happily grabbed it and ended up a guinea pig...). Multi-node communication simply wouldn't work, only single-node... which rather defeats the point Orz...
In the end I downgraded to a stable release, 0.20 at least; following the docs and online articles, everything went smoothly.
Once the example programs run through without problems, you can start developing your applications!!