Changing the Hadoop/HDFS log level

Description:


If a large directory is deleted and the NameNode is restarted immediately, there are a lot of blocks that do not belong to any file. Each of these produces a log line such as:

2014-11-08 03:11:45,584 INFO BlockStateChange (BlockManager.java:processReport(1901)) - BLOCK* processReport: blk_1074250282_509532 on 172.31.44.17:1019 size 6 does not belong to any file.

These messages are printed while the FSNamesystem lock is held, which can cause the NameNode to take a long time to come out of safe mode.

One solution is to downgrade the logging level of the BlockStateChange logger.

Solution

Watch the NameNode log to confirm the messages are being printed:

tail -f /var/log/hadoop/hdfs/hdfs-namenode.log

Open the log-level servlet in a browser:

http://<namenode:50070>/logLevel

Input "BlockStateChange" and Level is "WARN" and then click "Set Log Level" button

Wait 2~3 minutes; the "does not belong to any file" messages stop and NameNode performance returns to normal.
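
Note that both the /logLevel servlet and hadoop daemonlog change the level only in the running process, so it reverts after the next restart. To make the change permanent, the logger can be set in the NameNode's log4j.properties; a sketch, assuming the file lives in the Hadoop configuration directory (e.g. /etc/hadoop/conf, which varies by distribution):

# Keep BlockStateChange messages at WARN and above across restarts
log4j.logger.BlockStateChange=WARN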
