Differences in scheduling between Mesos and Kubernetes

Preface: my question is somewhat related to this one, but I want to dig deeper into specific aspects of scheduling.

Apart from Kubernetes' scheduling being centralized while Mesos' scheduling is decentralized, following a two-step process, what other differences are there between the scheduling algorithms of the two projects?

I have been using Kubernetes for half a year and have never used Mesos. I understand the concept of resource offers, but I cannot draw a comparison between the Mesos and Kubernetes scheduling algorithms, mainly because I don't know the implementation of either tool in depth.

I'm not sure these are directly comparable. Kubernetes can run as a Mesos framework. Its scheduler is described here. It is based on filtering and then ranking the nodes.
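The filter-and-rank idea can be sketched roughly as follows. This is an illustrative model, not the real kube-scheduler code; the function names, node/pod dictionaries, and the "least requested"-style scoring are my assumptions:

```python
# Hypothetical sketch of Kubernetes-style two-phase scheduling:
# first filter out nodes that cannot fit the pod ("predicates"),
# then rank the survivors ("priorities") and pick the best one.

def filter_nodes(nodes, pod):
    """Keep only nodes with enough free CPU and memory."""
    return [n for n in nodes
            if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]]

def rank_node(node, pod):
    """Score a node higher the more headroom remains after placement
    (similar in spirit to the 'least requested' priority)."""
    cpu_left = (node["free_cpu"] - pod["cpu"]) / node["cpu"]
    mem_left = (node["free_mem"] - pod["mem"]) / node["mem"]
    return (cpu_left + mem_left) / 2

def schedule(nodes, pod):
    feasible = filter_nodes(nodes, pod)
    if not feasible:
        return None  # pod stays pending
    return max(feasible, key=lambda n: rank_node(n, pod))

nodes = [
    {"name": "n1", "cpu": 4, "mem": 8, "free_cpu": 1, "free_mem": 2},
    {"name": "n2", "cpu": 4, "mem": 8, "free_cpu": 3, "free_mem": 6},
]
pod = {"cpu": 1, "mem": 2}
print(schedule(nodes, pod)["name"])  # n2: more headroom left after placement
```

The key point of the comparison below is that both phases run inside a single central scheduler, whereas in Mesos the second phase belongs to each framework.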

Mesos' two-step scheduling relies more on each framework's own algorithm:

  1. Mesos makes offers to frameworks based on the DRF algorithm. Frameworks can additionally be prioritized using roles and weights.
  2. A framework decides, based on the offer, which tasks to run. Each framework can implement its own algorithm for matching tasks to offers.
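The first step above (DRF, dominant resource fairness) can be sketched as follows. This is a simplified illustration of the core idea only; the real Mesos allocator also accounts for roles, weights, reservations, and more, and all names here are mine:

```python
# Hedged sketch of the core DRF idea: offer resources next to the
# framework with the lowest "dominant share", i.e. the largest
# fraction it currently holds of any single resource type.

TOTAL = {"cpu": 16, "mem": 32}  # assumed cluster capacity

def dominant_share(allocation):
    """A framework's dominant share is its max fractional use of any resource."""
    return max(allocation[r] / TOTAL[r] for r in TOTAL)

def next_offer_target(allocations):
    """Pick the framework whose dominant share is currently smallest."""
    return min(allocations, key=lambda f: dominant_share(allocations[f]))

allocations = {
    "framework_a": {"cpu": 8, "mem": 4},   # dominant share: 8/16  = 0.5
    "framework_b": {"cpu": 2, "mem": 10},  # dominant share: 10/32 ≈ 0.31
}
print(next_offer_target(allocations))  # framework_b is furthest behind
```

The second step (matching tasks to offers) is intentionally left open by Mesos; that is exactly the part each framework implements itself.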

Appendix, quoted from https://medium.com/@ArmandGrillet/comparison-of-container-schedulers-c427f4f7421:

Monolithic scheduling

Monolithic schedulers are composed of a single scheduling agent handling all the requests; they are commonly used in high-performance computing. A monolithic scheduler generally applies a single-algorithm implementation for all incoming jobs, so running different scheduling logic depending on job types is difficult. Apache Hadoop YARN [55], a popular architecture for Hadoop that delegates many scheduling functions to per-application components, is a monolithic scheduler architecture due to the fact that the resource requests from application masters have to be sent to a single global scheduler in the resource master.

Two-level scheduling

A two-level scheduler adjusts the allocation of resources to each scheduler dynamically, using a central coordinator to decide how many resources each sub-cluster can have; it is used in Mesos [50] and was used for Hadoop-on-Demand (now replaced by YARN). With this architecture, the allocator avoids conflicts by offering a given resource to only one framework at a time and attempts to achieve dominant resource fairness by choosing the order and the sizes of the resources it offers. Only one framework examines a resource at a time, so the concurrency control is called pessimistic, a strategy that is less error-prone but slower compared to an optimistic concurrency control offering a resource to many frameworks at the same time.
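The pessimistic, one-framework-at-a-time offer cycle described above might be sketched like this. This is not Mesos code; the offer loop, the `decide` callbacks, and the framework definitions are all illustrative assumptions:

```python
# Illustrative sketch of pessimistic two-level scheduling: the
# allocator hands a free resource to exactly one framework at a time;
# that framework either launches a task on it or declines, and only
# then can the resource be offered to the next framework.

def allocate(resource, frameworks):
    """Offer `resource` to frameworks in turn until one accepts."""
    for fw in frameworks:
        task = fw["decide"](resource)  # framework-specific logic (step 2)
        if task is not None:
            # No conflict is possible: this framework is the sole offer holder.
            return fw["name"], task
    return None, None  # every framework declined the offer

frameworks = [
    {"name": "batch", "decide": lambda r: "job-1" if r["cpu"] >= 4 else None},
    {"name": "web",   "decide": lambda r: "srv-1" if r["cpu"] >= 1 else None},
]
owner, task = allocate({"cpu": 2, "mem": 4}, frameworks)
print(owner, task)  # batch declines (needs 4 cpus), web accepts
```

The serialization is what makes this pessimistic: correctness is easy, but a slow framework holding an offer delays everyone behind it.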

Shared-state scheduling

Omega grants each scheduler full access to the entire cluster, allowing them to compete in a free-for-all manner. There is no central resource allocator, as all of the resource-allocation decisions take place in the schedulers. There is no central policy-enforcement engine; individual schedulers make decisions in this variant of the two-level scheme. By supporting independent scheduler implementations and exposing the entire allocation state of the schedulers, Omega can scale to many schedulers and works with different workloads with their own scheduling policies [54].
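The optimistic-concurrency side of this shared-state design can be sketched as a compare-and-swap against the cluster state. This is a rough conceptual model of Omega-style commits, not its actual implementation; the state layout and function names are assumed:

```python
# Rough sketch of Omega-style shared-state scheduling: every scheduler
# sees the whole cluster and commits placements optimistically; a
# commit fails if another scheduler claimed the resources first.

cluster = {"n1": {"free_cpu": 4}}  # shared cluster state, visible to all

def try_commit(node, cpu, seen_free):
    """Compare-and-swap in spirit: succeed only if the node's state is
    unchanged since this scheduler read it; otherwise signal a conflict."""
    if cluster[node]["free_cpu"] != seen_free:
        return False  # conflict: another scheduler got there first; retry
    if cluster[node]["free_cpu"] < cpu:
        return False  # not enough capacity
    cluster[node]["free_cpu"] -= cpu
    return True

# Two schedulers both read n1's state (free_cpu == 4), then race to commit.
print(try_commit("n1", 3, seen_free=4))  # first commit succeeds
print(try_commit("n1", 3, seen_free=4))  # second fails: its view is stale
```

The losing scheduler simply re-reads the state and retries, which is why this approach trades occasional wasted work for much higher scheduler parallelism than pessimistic offers.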