How can Orion GE be scaled?

Ram*_*ras 5 fiware-orion fiware fiware-cygnus fiware-cosmos

I have deployed an Orion instance in FILAB and configured the Cygnus injector to store the information in Cosmos.

But... let's imagine a scenario in which the number of entities grows dramatically. In that hypothetical case a single Orion GE instance would not be enough, so it would be necessary to deploy more instances.

What would the scaling procedure be, considering that the maximum quota is:

  • VM instances: 5
  • VCPUs: 10
  • Hard disk: 100 GB
  • Memory: 10240 MB
  • Public IPs: 1

I know the quota can be changed, but what are the limits for a free account?

What is the hard disk limit in the Cosmos head node? (theoretically a 5 GB quota)

Is it possible to deploy more Orion Context Broker instances behind a single public IP, or is it necessary to request several public IPs? How?

In summary, I am asking for information about the scaling procedure for the proposed scenario and about the free account limits (maximum possible quota).

Thanks in advance. Kind regards.

Ramón

fga*_*lan 3

Regarding Orion scalability, there are two dimensions to consider:

  • Scalability in the number of entities. In this case, the scarce resource is the DB, so you would need to scale the MongoDB layer. The usual procedure to scale MongoDB is sharding; please check the official MongoDB documentation about it.

  • Scalability in the number of operation requests to manage such entities. In this case, you can add more Orion nodes (each one running in a separate VM, plus an additional VM in front of them running load balancer software to distribute the load among the Orion nodes). Orion is a stateless process that can run in such a horizontal scaling configuration as long as: 1) you don't use ONTIMEINTERVAL subscriptions (see details in this post, and the UPDATE2 note below), and 2) you configure the -subCacheIval CLI parameter with a value small enough to ensure eventual consistency (basically, -subCacheIval is the maximum time that may pass from when a subscription with an entity pattern is created until it is consolidated in all the Orion nodes).
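As a rough sketch, the two scaling steps above could look like this (the mongos host name, database name and shard key are illustrative; check the MongoDB and Orion documentation for your versions before doing this on a real deployment):

```shell
# DB layer: enable sharding for Orion's database on a MongoDB cluster
# whose shards are already configured (hypothetical "mongos-host").
# The shard key shown here is only illustrative.
mongo --host mongos-host --eval '
  sh.enableSharding("orion");
  sh.shardCollection("orion.entities", { "_id.id": 1 });
'

# Context broker layer: start each Orion node against the shared DB,
# with a short subscription cache refresh interval (in seconds) so that
# pattern subscriptions propagate quickly to all nodes.
contextBroker -dbhost mongos-host -subCacheIval 60
```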


In any case, you would need additional VMs, but not additional IPs: the system only needs one public IP (the one assigned to the load balancer), and all the other communications can be done internally. Cloud quota information has already been answered by @flopez in another post.
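To illustrate the statelessness, an update sent through the balancer can be read back no matter which Orion node serves the later query, since all nodes share the same MongoDB. The public IP and entity used below are made up, and the payloads use the NGSI v1 API current at the time:

```shell
# Append an attribute to entity Room1 through the balancer's public IP.
curl -s http://130.206.0.1:1026/v1/updateContext \
  -H 'Content-Type: application/json' -H 'Accept: application/json' \
  -d '{"contextElements": [{"type": "Room", "id": "Room1",
        "attributes": [{"name": "temperature", "value": "21"}]}],
       "updateAction": "APPEND"}'

# Query the same entity; the balancer may pick a different Orion node,
# but the answer is identical because the state lives in MongoDB.
curl -s http://130.206.0.1:1026/v1/contextEntities/Room1 \
  -H 'Accept: application/json'
```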

Regarding the persistence of data in Cosmos through Cygnus: the same way you create a farm of Orion processes, you can add more Cygnus processes in charge of receiving notifications from the Orion farm. Simply define a mapping strategy for all your entities, creating subscriptions that specify which entities are notified to Cygnus process A, which ones to Cygnus process B, and so on. The remaining problem is the connectivity between this Cygnus farm and the Global Instance of Cosmos (located on the Internet). Assuming the Cygnus farm runs on top of VMs with private addressing, you must install some kind of proxy in another VM in order to reach Cosmos.
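A sketch of such a mapping strategy using NGSI v1 subscriptions (the entity types, patterns, Cygnus host names and port 5050 are illustrative):

```shell
# Entities of type Room matching "Room.*" are notified to Cygnus process A...
curl -s http://orion-host:1026/v1/subscribeContext \
  -H 'Content-Type: application/json' -H 'Accept: application/json' \
  -d '{"entities": [{"type": "Room", "isPattern": "true", "id": "Room.*"}],
       "reference": "http://cygnus-a:5050/notify",
       "duration": "P1M",
       "notifyConditions": [{"type": "ONCHANGE", "condValues": ["temperature"]}]}'

# ...while entities of type Car matching "Car.*" go to Cygnus process B.
curl -s http://orion-host:1026/v1/subscribeContext \
  -H 'Content-Type: application/json' -H 'Accept: application/json' \
  -d '{"entities": [{"type": "Car", "isPattern": "true", "id": "Car.*"}],
       "reference": "http://cygnus-b:5050/notify",
       "duration": "P1M",
       "notifyConditions": [{"type": "ONCHANGE", "condValues": ["speed"]}]}'
```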

About the HDFS quota: yes, it is 5 GB by default, but it can be changed on demand. It is worth mentioning that a new HDFS cluster with a higher storage capacity will be released in the short term.
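You can check your current quota and usage yourself with the standard HDFS counters from the Cosmos head node (the path is illustrative):

```shell
# Prints, among others, the columns QUOTA, REMAINING_QUOTA,
# SPACE_QUOTA and REMAINING_SPACE_QUOTA for the given directory.
hadoop fs -count -q /user/yourusername
```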

UPDATE: a more detailed workflow explanation for the subscription-update-notification case is provided in this separate Q&A post.

UPDATE2: ONTIMEINTERVAL subscriptions were removed in Orion 1.0.0 (March 2016).