- Cannot be a node that will host the Microsegmentation Console itself
- Workstation, bastion instance, or jump box
- Equipped with Docker 18.02 or later
- Kubernetes 1.19 or later
- Must pass Kubernetes conformance testing
Kubernetes node groups
The installer expects the following node groups to exist, with the listed labels, so that containers are scheduled correctly and remain highly available.
|Node group|Size|Number of nodes|Autoscaling|
|---|---|---|---|
|MongoDB router and configuration|16 vCPU, 64GB|3|no|
|MongoDB data shards|32 vCPU, 128GB|3 x number of data shards|no|
|MongoDB reports shards|32 vCPU, 128GB|3 x number of reports shards|no|
||16 vCPU, 64GB|3|yes|
||8 vCPU, 32 GB|2|no|
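As a quick planning aid, the per-group counts above can be totaled for a given shard layout. The sketch below is illustrative only: the helper name and example shard counts are hypothetical; the fixed node counts come from the table above.

```python
# Total node count for the node groups listed above.
# data_shards / reports_shards are example inputs you choose.
def total_nodes(data_shards: int, reports_shards: int) -> int:
    return (
        3                      # MongoDB router and configuration
        + 3 * data_shards      # 3 nodes per data shard
        + 3 * reports_shards   # 3 nodes per reports shard
        + 3                    # 16 vCPU, 64GB group
        + 2                    # 8 vCPU, 32 GB group
    )

print(total_nodes(data_shards=1, reports_shards=1))  # 14 nodes with one shard of each kind
```

Each additional data or reports shard adds three nodes, since every shard runs as a three-member replica set.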
Here, n represents an integer value; use it to number the shards sequentially, starting with
This sizing is calibrated to handle at least 10K enforcers, each with 20 processing units, where each processing unit generates 50 unique flows per interval.
A flow report consumes approximately 120 bytes on disk.
Given the expected number of flow reports, you can plan your storage expansion.
Example: for a steady stream of 33K flow reports per second (600 requests per second on the
/flowreports API), expect ~32TB of storage per node for the default 90 days of retention.
Those reports are hosted on a separate MongoDB shard.
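The example above can be sanity-checked with back-of-the-envelope arithmetic: 33K reports/s at ~120 bytes each over 90 days comes to roughly 31TB of raw data, in line with the ~32TB figure once indexes and overhead are accounted for. A minimal sketch (the function name is hypothetical):

```python
def storage_tb(flows_per_sec: float, bytes_per_report: float = 120,
               retention_days: int = 90) -> float:
    """Estimated raw flow-report storage in TB (1 TB = 1e12 bytes)."""
    return flows_per_sec * bytes_per_report * 86_400 * retention_days / 1e12

print(round(storage_tb(33_000), 1))  # ~30.8 TB raw, before indexes and overhead
```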
The disk used for the reports must sustain ~2K input/output operations per second (IOPS), mostly writes, at 33K flows per second; this can spike to 4K to 5K IOPS. If ingestion performance degrades and replication starts to lag, provision more IOPS.
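If your expected flow rate differs from 33K per second, a linear extrapolation of the figures above gives a first-order IOPS target. This assumes IOPS scale proportionally with flow rate, which is a simplification; the function name and spike factor (~2.5x, from the documented 4K to 5K spikes) are assumptions:

```python
def required_iops(flows_per_sec: float, baseline_flows: float = 33_000,
                  baseline_iops: float = 2_000, spike_factor: float = 2.5):
    """Steady-state and spike IOPS targets, scaled linearly from the
    documented baseline of ~2K IOPS at 33K flows per second."""
    steady = flows_per_sec / baseline_flows * baseline_iops
    return steady, steady * spike_factor

steady, spike = required_iops(50_000)
print(round(steady), round(spike))  # ~3030 steady, ~7576 at spike
```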
Also keep in mind that, as the dataset grows, you can hit the maximum volume size offered by your storage provider (for instance, AWS limits volumes to 16TB). If you expect to exceed that limit, add more shards for the reports dataset.
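To estimate when a single reports volume will hit a provider cap, divide the cap by the daily growth rate. The sketch below reuses the ~120 bytes per report figure and the 16TB AWS limit mentioned above; the function name is hypothetical:

```python
def days_until_full(volume_limit_tb: float, flows_per_sec: float,
                    bytes_per_report: float = 120) -> float:
    """Days before raw flow-report data alone fills the volume."""
    daily_bytes = flows_per_sec * bytes_per_report * 86_400
    return volume_limit_tb * 1e12 / daily_bytes

print(round(days_until_full(16, 33_000)))  # a 16TB volume fills in ~47 days
```

At 33K flows per second, ~47 days is well short of the default 90-day retention, which is why additional reports shards become necessary at this scale.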