RabbitMQ, Redis and Hazelcast in a scalable microservice architecture
I have a question about scalability in a microservice architecture:
Independent of the inter-service communication style (REST over HTTP or
message based): if a service scales, which means several replicas of
the service are going to be launched, how is a shared main memory
realized? To be more precise, how can instance1 access the memory of
instance2?
I am asking because a non-in-memory database shared by all instances of the service could slow down the read and write process.
Could some expert in designing scalable system architectures explain
what exactly the difference is between using the (open source) Redis
solution and the (open source) Hazelcast solution for this
problem?
Another possible solution: designing the scalable system with RabbitMQ:
Is it feasible to use message queues as a shared memory solution, by
sending large/medium size objects within messages to a worker queue?
Thanks for your help.
several instances of the service are going to be launched, how is a shared main memory realized? To be more precise, how can instance1 access the memory of instance2?
You don't. Stateless workloads scale by adding more replicas. It is important that those replicas really are stateless and loosely coupled - they share nothing. All replicas may still talk to an in-memory service or a database, but that stateful service is its own independent service (in a microservice architecture).
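The share-nothing pattern can be sketched in a few lines. This is a minimal simulation: the `SharedStore` class is a hypothetical stand-in for an external in-memory service such as Redis or Hazelcast, and `ServiceReplica` plays the role of one stateless instance; the names are illustrative, not any real API.

```python
class SharedStore:
    """Stand-in for an external in-memory service (e.g. Redis)."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


class ServiceReplica:
    """A stateless replica: it keeps no session state of its own."""
    def __init__(self, store):
        self.store = store  # the only path to shared state

    def handle_write(self, key, value):
        self.store.set(key, value)

    def handle_read(self, key):
        return self.store.get(key)


store = SharedStore()
instance1 = ServiceReplica(store)
instance2 = ServiceReplica(store)

instance1.handle_write("session:42", {"user": "alice"})
# instance2 sees the write because the state lives in the store,
# never in instance1's process memory.
print(instance2.handle_read("session:42"))  # prints: {'user': 'alice'}
```

Because neither replica holds state, you can add or remove replicas freely; only the store has to be made highly available.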
what exactly is the difference in using the (open source) Redis solution or using the (open source) Hazelcast solution to this problem?
Both are valid solutions. Which one fits you best depends on which client libraries, protocols, or integration patterns suit your stack: Redis is a standalone server written in C with client libraries for virtually every language, while Hazelcast is a JVM-based in-memory data grid that can also be embedded directly in a Java application process.
Is it feasible to use message queues as a shared memory solution, by sending large/medium size objects within messages to a worker queue?
Yes, that works well. Alternatively, you could use a distributed publish-subscribe messaging platform such as Apache Kafka or Apache Pulsar.
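The worker-queue idea amounts to serializing objects into message bodies that any worker replica can consume. A hedged sketch of that flow, using the standard library's `queue.Queue` as a stand-in for a RabbitMQ worker queue so it runs without a broker (the `publish`/`consume` helpers are illustrative, not a broker client API):

```python
import json
import queue

work_queue = queue.Queue()  # stand-in for a RabbitMQ worker queue


def publish(q, obj):
    """Serialize an object into a message body, as a producer would."""
    q.put(json.dumps(obj).encode("utf-8"))


def consume(q):
    """A worker pulls one message and deserializes the payload."""
    body = q.get()
    return json.loads(body.decode("utf-8"))


# A medium-sized object travels inside the message itself.
publish(work_queue, {"task": "resize", "image_id": 101, "pixels": [0] * 1000})
task = consume(work_queue)
print(task["task"], task["image_id"])  # prints: resize 101
```

For genuinely large payloads, a common refinement is to put only a reference (e.g. an object-store key) in the message and keep the blob elsewhere, so the broker is not burdened with bulk data.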