My Web Markups - richard yuwen
27 annotations
  • Scoring function
  • Interconnect topology of the GPU cards
  • Load measured as the number of containers bound to each card
  • Allocate queries Redis for load information to see how busy each card is
  • An external database stores the GPU load information for every node
  • NVML is used to obtain the real-time state of the GPU cards
  • The real GPU load at task-creation time dynamically decides which GPU ids get mounted
  • The mapping from virtual ids to real ids cannot be a simple linear mapping
  • Rough balancing of the GPU load
  • Fabricate extra virtual GPU IDs to report to K8s
  • GPU affinity is not taken into account
  • Provide a sharing mechanism to maximize resource utilization
  • Strategy for allocating GPU memory
  • No GPU virtualization of any kind
  • Allocate is called when a container is created; it returns the special configuration needed to use that resource on the host (for example, environment variables), hands this information to the Kubelet, and the Kubelet passes it to the container at startup (see the sketch after this list)
  • vendor-domain/resource
  • The resource is registered with K8s
  • The Device-Plugin mechanism is essentially an RPC service
  • AMD GPUs, etc.
  • RDMA devices
  • NVIDIA_VISIBLE_DEVICES=0,1
  • An environment variable specifies which GPU devices will be mounted
  • Multiple Nvidia-Docker containers can mount the same GPU
  • The host-side Nvidia-Driver
  • Interacts through the interfaces exposed by libnvidia-container
  • Runc invokes a hook called nvidia-container-runtime-hook
  • GPU support is placed in libnvidia-container, an OCI-compliant runtime library extension
  • Containerd wraps Runc plus other functionality such as lifecycle management, and runs on the host as a daemon
  • Using Nvidia-Docker for AI system environments is already the mainstream practice
  • The native Nvidia-Device-Plugin
  • The Device-Plugin mechanism in K8s schedules GPUs as extended resources
  • Mounting GPUs into containers
  • The general workflow of GPU scheduling
  • Channel affinity between GPU cards is not considered
  • Each GPU can be used by at most one container at a time
  • The Device-Plugin approach adds devices beyond the default resources (CPU, Memory, etc.)
  • Using Kubernetes to manage Nvidia-Docker makes GPU task allocation simpler and more reasonable, and has become the approach of almost every mainstream AI compute platform
  • Managing and using GPUs at container granularity is much easier than doing it from the host's perspective
  • A runtime that Nvidia wrote for Docker
  • Utilization of Nvidia GPUs
40 annotations
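The annotations above describe a shared-GPU device plugin: it advertises more virtual GPU IDs to K8s than there are physical cards, and at Allocate time maps the granted virtual IDs onto the least-loaded real cards, handing the result to the container via NVIDIA_VISIBLE_DEVICES. The Go sketch below only illustrates that mapping idea under assumed data structures (`gpu`, `pickRealIDs`); it is not the plugin's actual code, and real load figures would come from NVML or an external store such as Redis.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// gpu is an assumed record of one physical card and its current load,
// e.g. the number of containers currently bound to that card.
type gpu struct {
	realID string
	load   int
}

// pickRealIDs ignores the virtual IDs' numeric values (a simple linear
// virtual->real mapping would defeat balancing) and instead chooses the
// least-loaded physical cards for the requested device count.
func pickRealIDs(gpus []gpu, count int) []string {
	sort.Slice(gpus, func(i, j int) bool { return gpus[i].load < gpus[j].load })
	ids := make([]string, 0, count)
	for i := 0; i < count && i < len(gpus); i++ {
		ids = append(ids, gpus[i].realID)
	}
	return ids
}

func main() {
	// Illustrative load data; in practice this comes from NVML or Redis.
	gpus := []gpu{{"0", 2}, {"1", 0}, {"2", 1}}

	// Suppose Kubelet granted two virtual devices; map them to real cards.
	real := pickRealIDs(gpus, 2)

	// The Allocate response would carry this env var back to Kubelet,
	// which passes it to the container at startup.
	fmt.Println("NVIDIA_VISIBLE_DEVICES=" + strings.Join(real, ","))
}
```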
  • CRI + containerd ShimV2 revolution
  • Container Runtime management engine
  • Sigma/Kubernetes
  • lower-layer Container Runtime
  • CRI + containerd shimv2
  • CRI is the first calling interface in Kubernetes to be divided into plug-ins
  • remove and decouple, one by one, the complex features that were originally invasive to the core library's main code, by splitting them into different interfaces and plug-ins
  • how to connect containerd to the kata container
  • implementation of Shimv2 API
  • kata-Containerd-Shimv2
  • container-shim-v2 in Sandbox
  • a containerd shim
  • specify a shim for each Pod
  • containerd shim for each container
  • make KataContainers follow containerd
  • standard interface between the CRI shim and the containerd runtime
  • Containerd ShimV2
  • CRI-O
  • reuse the existing CRI shims
  • What can a CRI shim do? It can translate CRI requests into Runtime APIs
  • CRI shim
  • Dockershim
  • maintenance
  • we do not want a project like Docker to have to know what a Pod is and expose the API of a Pod
  • Containerd-centric API
  • Container Runtime Interface
  • multi-tenant
  • security
  • Kernel version run by your container is completely different from that run by the Host machine
  • each Pod now has an Independent kernel
  • the more layers you build here, the worse your container performance is
  • SECCOMP
  • secure Container Runtime
  • we are concerned about security
  • each Pod like the KataContainer is a lightweight virtual machine with a complete Linux kernel
  • a compressed package of your program + data + all dependencies + all directory files
  • the Container Image
  • the Container Runtime
  • runC that helps you set up these namespaces and cgroups, and helps you chroot, building a container required by an application
  • binding operation
  • NodeName field of the Pod object
  • Pods are created, instead of containers
  • the designs of Kubernetes CRI and Containerd ShimV2
  • KataContainers
  • RuntimeClass (see the sketch after this list)
  • ShimV2
  • container runtime
  • CRI
  • design and implementation of key technical features
49 annotations
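The annotations above are about selecting a per-Pod runtime (for example KataContainers via a containerd shim v2). As a hedged illustration, the Go snippet below builds the two Kubernetes objects involved: a RuntimeClass whose handler name (assumed here to be "kata") must match a shim configured in containerd, and a Pod that opts into it through runtimeClassName. Object names and the image are placeholders, not anything from the article.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// RuntimeClass: "kata" is an assumed handler name that containerd must
	// map to a shim v2 runtime (e.g. io.containerd.kata.v2) in its config.
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "kata"},
		Handler:    "kata",
	}

	// A Pod selects that runtime; its containers then run inside the
	// lightweight VM sandbox instead of a plain runc container.
	runtimeClass := "kata"
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &runtimeClass,
			Containers: []corev1.Container{
				{Name: "app", Image: "nginx"}, // placeholder image
			},
		},
	}

	// Print both objects so the sketch is runnable on its own.
	out, _ := json.MarshalIndent(map[string]interface{}{"runtimeClass": rc, "pod": pod}, "", "  ")
	fmt.Println(string(out))
}
```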
47 annotations
  • io.latency
  • You protect workloads with io.latency by specifying a latency target (e.g., 20ms). If the protected workload experiences average completion latency longer than its latency target value, the controller throttles any peers that have a more relaxed latency target than the protected workload. The delta between the prioritized cgroup's target and the targets of other cgroups is used to determine how hard the other cgroups are throttled: if a cgroup with io.latency set to 20ms is prioritized, cgroups with latency targets <= 20ms will never be throttled, while a cgroup with 50ms will get throttled harder than a cgroup with a 30ms target.

    Interface. The interface for io.latency is in a format similar to the other controllers: MAJOR:MINOR target=<target time in microseconds>. When io.latency is enabled, you'll see additional stats in io.stat: depth=<integer>, the current queue depth for the group; and avg_lat=<time in microseconds>, the running average IO latency for the group, which gives a general idea of the overall latency you can expect for this workload on the specified disk. Note: all cgroup knobs can be configured through systemd; see the systemd.resource-control documentation for details.

    Using io.latency. The limits are applied only at the peer level in the hierarchy. This means that in the diagram below, only groups A, B, and C will influence each other, and groups D and F will influence each other. Group G will influence nobody. Thus, a common way to configure this is to set io.latency in groups A, B, and C.

    Configuration strategies. Generally you don't want to set a value lower than the latency your device supports. Experiment to find the value that works best for your workload: start at higher than the expected latency for your device, and watch the avg_lat value in io.stat for your workload group to get an idea of the latency during normal operation. Use this value as a basis for your real setting: try setting it, for example, around 20% higher than the value in io.stat. Experimentation is key here since avg_lat is a running average and subject to statistical anomalies. Setting too tight a control (i.e., too low a latency target) provides greater protection to a workload, but it can come at the expense of overall system IO overhead if other workloads get throttled prematurely. Another important factor is that hard disk IO latency can fluctuate greatly: if the latency target is too low, other workloads can get throttled due to normal latency fluctuations, again leading to sub-optimal IO control. Thus, in most cases, you'll want to set the latency target higher than the expected latency to avoid unnecessary throttling; the only question is by how much. Two general approaches have proven most effective:

    Setting io.latency slightly higher (20-25%) than the usual expected latency. This provides a tighter protection guarantee for the workload. However, the tighter control can sometimes mean the system pays more in terms of IO overhead, which leads to lower system-wide IO utilization. A setting like this can be effective for systems with SSDs.

    Setting io.latency to several times higher than the usual expected latency, especially for hard disks. A hard disk's usual uncontended completion latencies are between 7 and 20ms, but when contention occurs, the completion latency balloons quickly, easily reaching 10 times normal. Because the latency is so volatile, workloads running on hard disks are usually not sensitive to small swings in completion latency; things break down only in extreme conditions when latency jumps several times higher (which isn't difficult to trigger). Effective protection can be achieved in cases like this by setting a relaxed target on the protected group (e.g., 50 or 75ms) and a higher setting for lower-priority groups (e.g., an additional 25ms over the higher-priority group). This way, the workload can have reasonable protection without significantly compromising hard disk utilization by triggering throttling when it's not necessary.

    How throttling works. io.latency is work conserving: as long as everybody is meeting their latency target, the controller doesn't do anything. Once a group starts missing its target, it begins throttling any peer group that has a higher target than itself. This throttling takes two forms: queue depth throttling, which limits the number of outstanding IOs a group is allowed to have (the controller clamps down relatively quickly, starting at no limit and going all the way down to 1 IO at a time); and artificial delay induction, for certain types of IO that can't be throttled without possibly affecting higher-priority groups adversely, such as swapping and metadata IO. These types of IO are allowed to occur normally, but they are "charged" to the originating group. Once the victimized group starts meeting its latency target again, it will start unthrottling any peer groups that were throttled previously. If the victimized group simply stops doing IO, the global counter will unthrottle appropriately.

    fbtax2 IO controller configuration. As discussed previously, the goal of the fbtax2 cgroup hierarchy was to protect workload.slice. In addition to the memory controller settings, the team found that IO protections were also necessary to make it all work. When memory pressure increases, it often translates into IO pressure. Memory pressure leads to page evictions: the higher the memory pressure, the more page evictions and re-faults, and therefore more IOs. It isn't hard to generate memory pressure high enough to saturate a disk with IOs, especially the rotating hard disks that were used on the machines in the fbtax2 project. To correct for this, the team used a strategy similar to strategy 2 described above: they prioritized workload.slice by setting its io.latency higher than expected, to 50ms. This provides more protection for workload.slice than for system.slice, whose io.latency is set to 75ms (see the sketch after this list). When workload.slice has been delayed by lack of IO past its 50ms threshold, it gets IO priority: the kernel limits IO from system.slice and reallocates it to workload.slice so the main workload can keep running. hostcritical.slice was given a similar level of protection as workload.slice, since any problems there can also impact the main workload; in this case it used memory.min to guarantee it will have enough to keep running. Though they knew system.slice needed lower IO priority, the team determined the 75ms number through trial and error, modifying it repeatedly until they achieved the right balance between protecting the main workload and ensuring the stability of system.slice. In the final installment of this case study, we'll summarize the strategies used in the fbtax2 project, and look at some of the utilization gains that resulted in Facebook's server farms.
  • This is where you specify IO limits
  • O
  • accounting of all IOs per-cgroup
  • IOPS
  • system has the flexibility to limit IO to low priority workloads
7 annotations
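As a concrete illustration of the fbtax2 settings quoted above (50ms protection for workload.slice and a more relaxed 75ms for system.slice), the sketch below writes targets in the documented MAJOR:MINOR target=<microseconds> format. The cgroup paths and the 8:0 device number are assumptions for illustration; this is not Facebook's tooling.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// setIOLatency writes an io.latency target for one block device into the
// given cgroup's io.latency file, using the documented interface format:
// MAJOR:MINOR target=<target time in microseconds>.
func setIOLatency(cgroup, dev string, targetUsec int) error {
	p := filepath.Join("/sys/fs/cgroup", cgroup, "io.latency")
	line := fmt.Sprintf("%s target=%d\n", dev, targetUsec)
	return os.WriteFile(p, []byte(line), 0644)
}

func main() {
	// 50ms for the protected workload, 75ms for lower-priority system services.
	// Requires cgroup2 with the io controller enabled and root privileges.
	if err := setIOLatency("workload.slice", "8:0", 50000); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	if err := setIOLatency("system.slice", "8:0", 75000); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```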
  • a memory-intensive process
  • out-of-the-box improvement over the kernel OOM killer
  • The kernel OOM handler’s main job is to protect the kernel
  • oomd
  • rejects a few and continues to run
  • Load shedding
  • Load shedding is a technique to avoid overloading and crashing a system by temporarily rejecting new requests. The idea is that all loads will be better served if the system rejects a few and continues to run, instead of accepting all requests and crashing due to lack of resources. In a recent test, a team at Facebook that runs asynchronous jobs, called Async, used memory pressure as part of a load shedding strategy to reduce the frequency of OOMs.

    The Async tier runs many short-lived jobs in parallel. Because there was previously no way of knowing how close the system was to invoking the OOM handler, Async hosts experienced excessive OOM kills. Using memory pressure as a proactive indicator of general memory health, Async servers can now estimate, before executing each job, whether the system is likely to have enough memory to run the job to completion. When memory pressure exceeds the specified threshold, the system ignores further requests until conditions stabilize (see the sketch after this list). The chart shows how Async responds to changes in memory pressure: when memory.full (in orange) spikes, Async sheds jobs back to the Async dispatcher, shown by the blue async_execution_decision line. The results were significant: load shedding based on memory pressure decreased memory overflows in the Async tier and increased throughput by 25%. This enabled the Async team to replace larger servers with servers using less memory, while keeping OOMs under control.

    oomd - memory pressure-based OOM. oomd is a new userspace tool similar to the kernel OOM handler, but it uses memory pressure to provide greater control over when processes start getting killed, and which processes are selected. The kernel OOM handler's main job is to protect the kernel; it's not concerned with ensuring workload progress or health. Consequently, it's less than ideal in terms of when and how it operates: it starts killing processes only after failing at multiple attempts to allocate memory, i.e., after a problem is already underway; it selects processes to kill using primitive heuristics, typically killing whichever one frees the most memory; and it can fail to start at all when the system is thrashing (memory utilization remains within normal limits, but workloads don't make progress, and the OOM killer never gets invoked to clean up the mess). Lacking knowledge of a process's context or purpose, the OOM killer can even kill vital system processes: when this happens, the system is lost, and the only solution is to reboot, losing whatever was running, and taking tens of minutes to restore the host. Using memory pressure to monitor for memory shortages, oomd can deal more proactively and gracefully with increasing pressure by pausing some tasks to ride out the bump, or by performing a graceful app shutdown with a scheduled restart. In recent tests, oomd was an out-of-the-box improvement over the kernel OOM killer and is now deployed in production on a number of Facebook tiers.

    Case study: oomd at Facebook. See how oomd was deployed in production at Facebook in this case study looking at Facebook's build system, one of the largest services running at Facebook.

    oomd in the fbtax2 project. As discussed previously, the fbtax2 project team prioritized protection of the main workload by using memory.low to soft-guarantee memory to workload.slice, the main workload's cgroup. In this work-conserving model, processes in system.slice could use the memory when the main workload didn't need it. There was a problem though: when a memory-intensive process in system.slice can no longer take memory due to the memory.low protection on workload.slice, the memory contention turns into IO pressure from page faults, which can compromise overall system performance. Because of limits set in system.slice's IO controller (which we'll look at in the next section of this case study), the increased IO pressure causes system.slice to be throttled. The kernel recognizes the slowdown is caused by lack of memory, and memory.pressure rises accordingly. oomd monitors the pressure, and once it exceeds the configured threshold, kills one of the processes, most likely the memory hog in system.slice, and resolves the situation before the excess memory pressure crashes the system.
  • outweigh the overhead of occasional OOM events
  • demand exceeds the total memory available
  • Overcommitting on memory—promising more memory for processes than the total system memory—is a key technique for increasing memory utilization
10 annotations
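The pressure-based load shedding described above can be sketched roughly as follows: before dispatching a job, read the "full" avg10 value from the PSI memory pressure file and reject the job while it exceeds a threshold. The file path, the threshold value, and the helper name are assumptions for illustration; this is neither Facebook's Async dispatcher nor oomd itself.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// memFullAvg10 parses the "full" line of a PSI memory pressure file, e.g.
//   full avg10=1.23 avg60=0.80 avg300=0.30 total=123456
// and returns the avg10 value in percent.
func memFullAvg10(path string) (float64, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 1 && fields[0] == "full" {
			return strconv.ParseFloat(strings.TrimPrefix(fields[1], "avg10="), 64)
		}
	}
	return 0, fmt.Errorf("no full line in %s", path)
}

func main() {
	const threshold = 10.0 // percent; in practice tuned per workload
	p, err := memFullAvg10("/proc/pressure/memory")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if p > threshold {
		fmt.Println("memory pressure too high, shedding new jobs")
		return
	}
	fmt.Println("accepting job")
}
```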
  • Lever points, ecosystem, closed loop, alignment, sorting things out, iteration, owner mindset
  • Being able to speak, write, and execute well are the three basic requirements for professionals
  • Annoying things like infighting, blame-shifting, credit-grabbing, and task-poaching are rarely absent either
  • Skills in PPT, communication, expression, time management, design, documentation, and so on
  • Alert configuration and monitoring cleanup
  • Good planning ability and a clear evolution blueprint
  • System building requires a global perspective
  • Some people can grow a small area of responsibility into something much bigger
  • Thinking of things the leader has not thought of
  • Go talk to the relevant person directly; having them explain it once gets you most of the way to understanding, which is much faster than reading documents or code
  • Communicating and giving feedback upward
  • Owner mindset
  • Proactively take on tasks, proactively communicate, proactively push projects forward, proactively coordinate resources, proactively report upward, proactively create impact
  • Take on responsibility proactively and communicate feedback promptly
  • Proactively step out of your comfort zone; when you feel struggle and pressure, it is often the darkness before dawn, and that is when you grow fastest
  • Force yourself out of your comfort zone
  • Keep learning actively so that your technical ability and knowledge grow in proportion to your years of experience; then what is there to be anxious about at 35?
  • Architecture should lead the business
  • How engineers can develop product thinking and guide the product direction
  • System building: core system capabilities, system boundaries, system bottlenecks, layered service decomposition, service governance
  • At the code level there is even more you can do: resource pooling, object reuse, lock-free design, splitting large keys, deferred processing, encoding and compression, GC tuning, and various language-specific high-performance practices
  • At the architecture level, you can use caching, preprocessing, read/write separation, asynchrony, parallelism, and so on
  • The process of moving from technique to principle
  • Knowledge remains a few scattered points rather than a system; it is not only easy to forget, but also narrows your perspective and limits how you see problems
24 annotations
21 annotations
  • Arun Kejariwal, Winston Lee, Owen Vallis, Jordan Hochenbaum, Bryce Yan. Published in: 2013 IEEE 16th International Conference on Computational Science and Engineering. Date of Conference: 3-5 Dec. 2013. Electronic ISBN: 978-0-7695-5096-1. DOI: 10.1109/CSE.2013.133. Publisher: IEEE. Conference Location: Sydney, NSW, Australia.

    Abstract: "Anywhere, Anytime and Any Device" is often used to characterize the next generation Internet. Achieving the above in light of the increasing use of the Internet worldwide, especially fueled by mobile Internet usage, and the exponential growth in the number of connected devices is non-trivial. In particular, the three As require development of infrastructure which is highly available, performant and scalable. Additionally, from a corporate standpoint, high efficiency is of utmost importance. To facilitate high availability, deep observability of physical, system and application metrics and analytics support, say for systematic capacity planning, is needed. Although there exist many commercial services to assist observability in the data center and the public/private cloud, they lack analytics support. To this end, we developed a framework at Twitter, called Chiffchaff, to drive capacity planning in light of a growing user base. Specifically, the framework provides support for automatic mining of application metrics and subsequent visualization of trends (for example, Week-over-Week (WoW), Month-over-Month (MoM)), data distribution, etcetera. Further, the framework enables deep diving into traffic patterns, which can be used to guide load balancing in shared systems. We illustrate the use of Chiffchaff with production traffic.

    I. Introduction. Recent years have seen robust growth (8% YoY) in Internet usage worldwide [1]. The proliferation of mobile devices such as smartphones and tablets has further fueled Internet usage. Although this lends itself as a huge opportunity for application developers and monetization via advertising et cetera (recently, Berg Insight forecasted that direct App store revenue would grow at a compound annual growth rate of 40.7% to reach €8.8 billion by 2015, from €1.6 billion in 2010), it exerts tremendous pressure on infrastructure services. To this end, in [2], Barroso and Hölzle argue: …
  • Week-over-Week (WoW) (see the sketch after this list)
  • Month-over-Month (MoM)
3 annotations
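As a toy illustration of the Week-over-Week trend highlighted above (not code from the paper): given a metric series and the number of samples per week, the percentage change against the same point one week earlier can be computed as below. The function name and the data are assumptions for illustration.

```go
package main

import "fmt"

// weekOverWeek returns the percentage change of each sample relative to the
// sample one week earlier, the kind of WoW trend a capacity-planning
// framework like Chiffchaff might visualize for an application metric.
func weekOverWeek(series []float64, samplesPerWeek int) []float64 {
	var wow []float64
	for i := samplesPerWeek; i < len(series); i++ {
		prev := series[i-samplesPerWeek]
		wow = append(wow, 100*(series[i]-prev)/prev)
	}
	return wow
}

func main() {
	// Illustrative weekly request counts (one sample per week).
	weekly := []float64{100, 108, 120, 126}
	fmt.Println(weekOverWeek(weekly, 1)) // [8 11.11... 5]
}
```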