Finished configuring Hadoop HA + ZooKeeper, but an error appears when starting the servers?

dandongsoft 2016-09-21
I have finished the Hadoop HA configuration, and the NameNode log reports the following:
2016-09-21 10:42:06,197 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode extension entered.
The reported blocks 18 has reached the threshold 0.9990 of total blocks 19. The number of live datanodes 2 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 29 seconds.
2016-09-21 10:42:06,198 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from DatanodeStorage[DS-d8fb5a35-467c-43d5-9566-653d2f1b524f,DISK,NORMAL] after starting up or becoming active. Its block contents are no longer considered stale
2016-09-21 10:42:06,198 INFO BlockStateChange: BLOCK* processReport: from storage DS-d8fb5a35-467c-43d5-9566-653d2f1b524f node DatanodeRegistration(10.0.1.181, datanodeUuid=9579271e-bc7c-4ef2-a6c5-c76ad3af68c5, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-ee9df79c-977c-449c-affb-a380783fcb10;nsid=999814669;c=0), blocks: 19, hasStaleStorages: false, processing time: 2 msecs
2016-09-21 10:42:06,409 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(10.0.1.179, datanodeUuid=61b005eb-c4d3-49c2-9209-4a1061061a78, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-ee9df79c-977c-449c-affb-a380783fcb10;nsid=999814669;c=0) storage 61b005eb-c4d3-49c2-9209-4a1061061a78
2016-09-21 10:42:06,410 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2016-09-21 10:42:06,410 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.0.1.179:50010
2016-09-21 10:42:06,437 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2016-09-21 10:42:06,437 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-c872e244-a129-460f-927a-ef379fbdecaa for DN 10.0.1.179:50010
2016-09-21 10:42:06,452 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from DatanodeStorage[DS-c872e244-a129-460f-927a-ef379fbdecaa,DISK,NORMAL] after starting up or becoming active. Its block contents are no longer considered stale
2016-09-21 10:42:06,452 INFO BlockStateChange: BLOCK* processReport: from storage DS-c872e244-a129-460f-927a-ef379fbdecaa node DatanodeRegistration(10.0.1.179, datanodeUuid=61b005eb-c4d3-49c2-9209-4a1061061a78, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-ee9df79c-977c-449c-affb-a380783fcb10;nsid=999814669;c=0), blocks: 7, hasStaleStorages: false, processing time: 0 msecs
2016-09-21 10:42:16,575 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2016-09-21 10:42:16,575 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:337)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:412)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
2016-09-21 10:42:16,577 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2016-09-21 10:42:16,586 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Starting recovery process for unclosed journal segments...
2016-09-21 10:42:16,682 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Successfully started new epoch 21
2016-09-21 10:42:16,682 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Beginning recovery of unclosed segment starting at txid 1146
2016-09-21 10:42:16,732 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Recovery prepare phase complete. Responses:
10.0.1.181:8485: segmentState { startTxId: 1146 endTxId: 1146 isInProgress: true } lastWriterEpoch: 20 lastCommittedTxId: 1145
10.0.1.180:8485: segmentState { startTxId: 1146 endTxId: 1146 isInProgress: true } lastWriterEpoch: 20 lastCommittedTxId: 1145
2016-09-21 10:42:16,733 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Using longest log: 10.0.1.181:8485=segmentState {
  startTxId: 1146
  endTxId: 1146
  isInProgress: true
}
lastWriterEpoch: 20
lastCommittedTxId: 1145

What is causing this?
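For reference, this is roughly how I check the HA and safe-mode state after startup with the standard HDFS admin commands (nn1/nn2 are placeholders for whatever NameNode IDs are configured under dfs.ha.namenodes):

    hdfs haadmin -getServiceState nn1    # expect one NameNode "active" and the other "standby"
    hdfs haadmin -getServiceState nn2
    hdfs dfsadmin -safemode get          # should report "Safe mode is OFF" once the extension expires
    hdfs dfsadmin -report                # live datanodes and reported blocks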