System Performance Estimates and Recommendations for Large Scale Deployments

This topic includes estimates and recommendations for storage capacity, disk performance, and network throughput for optimum performance of FortiSIEM deployments processing over 10,000 EPS.

In general, event ingestion at high EPS requires lower storage IOPS than queries do, simply because queries must scan the much larger volume of data that accumulates over time. For example, at 20,000 EPS you accumulate 86,400 times more data in a day than in one second, so a query such as 'Top Event Types by count for the past 1 day' must scan 20,000 x 86,400 = ~1.73 billion events. Therefore, it is important to size your FortiSIEM cluster to handle your query and report requirements first; a cluster sized that way will also handle event ingestion very well. These are the top three things to do for acceptable FortiSIEM query performance:

  1. Add more Worker nodes than are required for event ingestion alone
  2. A 10Gbps network on the NFS server is a must, and on the Supervisor and Worker nodes as well if feasible
  3. SSD caching on the NFS server – the SSD should be close to the size required to cache hot data. In typical customer scenarios, the last month of data can be considered hot data because monthly reports are commonly run
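The scan-volume arithmetic above can be sketched directly; the numbers below are the ones from the example (20,000 EPS, a 1-day query window):

```python
# Events a 'past 1 day' query must scan at a sustained ingest rate.
eps = 20_000                # sustained events per second
seconds_per_day = 86_400
events_to_scan = eps * seconds_per_day
print(f"{events_to_scan:,} events (~{events_to_scan / 1e9:.2f} billion)")
# → 1,728,000,000 events (~1.73 billion)
```

The same multiplication explains why longer query windows, not ingestion, dominate read IOPS requirements.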


Schedule frequently run reports into the dashboard

If you have frequently run ranking reports with group-by criteria (as opposed to raw message based reports), you can add them to a custom dashboard so that FortiSIEM schedules them to run in inline mode. Inline reports compute their results in a streaming manner as event data is processed in real time, so they place almost no burden on storage IOPS because they read very little data from the EventDB. Note that raw message reports (no group-by) are always computed directly from the EventDB.
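A minimal sketch (not FortiSIEM code; class and field names are illustrative) of why an inline group-by report needs no EventDB scan: the aggregate is updated per event at ingestion time, and the dashboard reads only the in-memory result:

```python
from collections import Counter

class InlineTopEventTypes:
    """Hypothetical inline 'Top Event Types by count' report."""

    def __init__(self):
        self.counts = Counter()

    def on_event(self, event):
        # Called once per incoming event as it streams in.
        self.counts[event["eventType"]] += 1

    def top(self, n=10):
        # Answering the report reads only the running aggregate,
        # never the on-disk event store.
        return self.counts.most_common(n)

report = InlineTopEventTypes()
for ev in [{"eventType": "logon"}, {"eventType": "logon"}, {"eventType": "scan"}]:
    report.on_event(ev)
print(report.top(2))  # → [('logon', 2), ('scan', 1)]
```

A raw message report, by contrast, has no such precomputable aggregate and must scan the stored events.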


An example scenario is presented at the end of this guide.




Estimates and Recommendations
Event Storage


Storage capacity estimates are based on an average event size of 64 compressed bytes x EPS (events per second). Browser Support and Hardware Requirements includes a table with storage capacity requirements for up to 10,000 EPS.
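For illustration (these figures are a back-of-the-envelope extrapolation, not taken from the referenced table), the 64-byte average implies the following rough capacity math at 10,000 EPS:

```python
# Rough EventDB storage estimate from 64 compressed bytes per event.
bytes_per_event = 64
eps = 10_000
seconds_per_day = 86_400

daily_bytes = bytes_per_event * eps * seconds_per_day
daily_gb = daily_bytes / 1e9
print(f"{daily_gb:.1f} GB/day, {daily_gb * 30 / 1e3:.2f} TB per 30 days")
# → 55.3 GB/day, 1.66 TB per 30 days
```

The 30-day figure is also a reasonable first estimate for sizing the SSD cache if the last month of data is treated as hot.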
Root Disk


Standard hard disk: 1000 IOPS or more. Lab testing for EC2 scalability used 2000 IOPS.

SVN Disk


1000 IOPS

EventDB Write IOPS for Event Ingestion


1000 IOPS for 100K EPS (minimum)
EventDB Read IOPS for Queries


As high as feasible, to improve query performance (use SSD caching on the NFS server when feasible). In EC2 scalability testing, 2000 read IOPS while ingesting 100K EPS using one Supervisor and two Workers produced these results:

Index Query – No filter, display COUNT(Matched Events), group-by event type for 24 hours

1.  Total Events processed = 2,594,816,711 (2.59 billion events)

2.  Average events per second scanned by Query (QEPS) = 1.02 million QEPS

3.  Average Query Runtime = 2543 seconds (~ 42 minutes)

Raw Event Log Query – Same as Index Query with filter Raw Event Log contains ‘e’

1.  Total Events processed = 350,914,385 (~351 million events)

2.  Average events per second scanned by Query (QEPS) = 179,909 QEPS

3.  Average Query Runtime = 1950 seconds (~ 33 minutes)
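As a sanity check, the QEPS figures above follow directly from total events scanned divided by query runtime:

```python
# Derive QEPS (events scanned per second of query runtime) from the
# EC2 test results quoted above.
index_events, index_runtime_s = 2_594_816_711, 2543
raw_events, raw_runtime_s = 350_914_385, 1950

index_qeps = index_events / index_runtime_s
raw_qeps = raw_events / raw_runtime_s
print(f"Index query: {index_qeps:,.0f} QEPS")       # ~1.02 million QEPS
print(f"Raw log query: {raw_qeps:,.0f} QEPS")       # ~180k QEPS
```

The small differences from the quoted figures come from rounding in the reported runtimes.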



Network Throughput

A 10Gbps network between the Supervisor, Workers, and NFS server is recommended.

Using VMXNet3 Adapter for VMware

To achieve the best network throughput in VMware environments, delete the E1000 network adapter and add one that uses VMXNet3 for the eth0/eth1 network configuration. The VMXNet3 adapter supports 10Gbps networking between VMs on the same host as well as across hosts, though you must also have a 10Gbps physical network adapter to achieve that level of throughput across hosts. You may need to upgrade the virtual hardware version (VMware KB 1003746) to be able to use VMXNet3. More details on the different types of VMware network adapters are available in VMware KB 1001805.

Achieving 10Gbps on AWS EC2

To achieve 10Gbps in the AWS EC2 environment, you will need to:

1.  Deploy the FortiSIEM Supervisor, Workers, and NFS server on 8xlarge-class instances (for example, c3.8xlarge). Refer to EC2 Instance Types for available types, and look for instance types with 10 Gigabit noted next to them.

2.  Use the HVM image, which supports enhanced networking, for both the FortiSIEM image and the NFS server image.

3.  Place the Supervisor, Workers, and NFS server in the same AWS EC2 placement group within an AWS VPC.



FortiSIEM recommends the use of separate network interfaces for event ingestion/GUI access and for storage traffic to NFS.
Number of Worker Nodes


6000 EPS per Worker for event ingestion. Add more Worker nodes for query performance. See the example below.



An MSP customer has 12,000 EPS across all their customers. Each event takes up 64 bytes on average in compressed form in the EventDB.

These calculations are extrapolations based on a test on EC2. Actual results may vary because of differences in hardware, event data, and types of queries. Therefore, it is recommended that customers run a pilot evaluation using production data, either on-premise or on AWS, before settling on an exact number of Worker nodes.
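A minimal sketch of the Worker-count extrapolation for this scenario. The 6000 EPS per Worker figure is the guideline above; the query-headroom multiplier is a hypothetical knob you would calibrate during the pilot, not a FortiSIEM-documented value:

```python
import math

eps = 12_000                   # MSP customer's aggregate event rate
ingest_eps_per_worker = 6_000  # ingestion capacity per Worker (guideline above)
query_headroom = 2.0           # hypothetical multiplier for query load; calibrate via pilot

ingest_workers = math.ceil(eps / ingest_eps_per_worker)
recommended_workers = math.ceil(ingest_workers * query_headroom)
print(ingest_workers, recommended_workers)  # → 2 4
```

That is, two Workers cover ingestion alone, and doubling that under the assumed headroom factor leaves capacity for concurrent queries and reports.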


This entry was posted in Administration Guides, FortiSIEM.

