k8s ephemeral storage



I had a similar error. My analysis:

Pods on the same k8s node share that node's ephemeral storage, which (unless special configuration is used) is where Spark stores the temporary data of its jobs (disk spill and shuffle data). The amount of ephemeral storage on a node is essentially the size of the storage available on that k8s node.
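
You can check how much ephemeral storage a node advertises by looking at its capacity/allocatable fields, for example with kubectl get node <node-name> -o yaml. A rough sketch of the relevant fragment of that output; the figures are made up for illustration:

    status:
      capacity:
        ephemeral-storage: 100Gi   # illustrative figure: total node-local storage
      allocatable:
        ephemeral-storage: 95Gi    # illustrative figure: what pods on the node can actually consume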

If some executor pods use up all of a node's ephemeral storage, other pods will fail when they try to write data to ephemeral storage. In your case the failing pod is the driver pod, but it could have been any other pod on that node. In my case it was an executor that failed with a similar error message.

I would try to optimize the Spark code first, before changing the deployment configuration:

  • reduce disk spill and shuffle writes (see the sketch after this list)
  • split transforms into smaller steps where possible
  • increase the number of executors only as a last resort :)
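
As an example of the first point: projecting and filtering early, and broadcasting a small join side, can cut shuffle writes drastically, because the large table no longer has to be shuffled onto ephemeral storage. A minimal PySpark sketch; the paths, table and column names are made up for illustration:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("reduce-shuffle-sketch").getOrCreate()

    # Hypothetical inputs: a large fact table and a small dimension table.
    events = spark.read.parquet("s3a://some-bucket/events")        # large
    countries = spark.read.parquet("s3a://some-bucket/countries")  # small

    # Project and filter as early as possible so less data reaches the join.
    events = events.select("country_id", "amount").where("amount > 0")

    # Broadcasting the small side turns this into a broadcast hash join,
    # so neither table is shuffled and far less data lands on ephemeral storage.
    joined = events.join(broadcast(countries), "country_id")

    joined.write.mode("overwrite").parquet("s3a://some-bucket/joined")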

If you know upfront the amount of storage required by each executor, you could try setting the resource requests (and not limits) for ephemeral storage to the right amount.
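
With Spark on Kubernetes, one way to do that (assuming a Spark 3.x style deployment) is a pod template file passed via spark.kubernetes.executor.podTemplateFile. A minimal sketch of such a template follows; the container name and the 10Gi figure are assumptions for illustration, not values taken from the question:

    apiVersion: v1
    kind: Pod
    spec:
      containers:
        - name: executor              # placeholder name; Spark merges this template into the executor pod
          resources:
            requests:
              ephemeral-storage: 10Gi # illustrative request, sized to the expected spill + shuffle data
            # no 'limits' entry on purpose, following the suggestion above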





