
FortiSIEM Installing in VMware ESX

Installing in VMware ESX

Setting the Network Time Protocol (NTP) for ESX

Installing a Supervisor, Worker, or Collector Node in ESX

Importing the Supervisor, Collector, or Worker Image into the ESX Server

Editing the Supervisor, Collector, or Worker Hardware Settings

Setting Local Storage for the Supervisor

Troubleshooting Tips for Supervisor Installations

Configuring the Supervisor, Worker, or Collector from the VM Console

Setting the Network Time Protocol (NTP) for ESX

It’s important that your Virtual Appliance has accurate time in order to correlate events from multiple devices within the environment.

  1. Log in to your VMWare ESX server.
  2. Select your ESX host server.
  3. Click the Configuration tab.
  4. Under Software, select Time Configuration.
  5. Click Properties.
  6. Select NTP Client Enabled.
  7. Click Options.
  8. Under General, select Start automatically.
  9. Under NTP Setting, click ...
  10. Enter the IP address of the NTP servers to use.

 

  11. Click Restart NTP service.
  12. Click OK to apply the changes.
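You can also confirm from the FortiSIEM virtual appliance itself that time is synchronizing correctly. A minimal check, assuming the standard ntpd tooling is present on the appliance:

# Check NTP peer status and the current system time from the virtual appliance console
ntpq -p
date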
Installing a Supervisor, Worker, or Collector Node in ESX

The basic process for installing a FortiSIEM Supervisor, Worker, or Collector node is the same. Since Worker nodes are only used in deployments that use NFS storage, you should first configure your Supervisor node to use NFS storage, and then configure your Worker node using the Supervisor NFS mount point as the mount point for the Worker. See Configuring NFS Storage for VMware ESX Server for more information. Collector nodes are only used in multi-tenant deployments, and need to be registered with a running Supervisor node.

Importing the Supervisor, Collector, or Worker Image into the ESX Server

Editing the Supervisor, Collector, or Worker Hardware Settings

Setting Local Storage for the Supervisor

Troubleshooting Tips for Supervisor Installations

When you’re finished with the specific hypervisor setup process, you need to complete your installation by following the steps described under General Installation.

Importing the Supervisor, Collector, or Worker Image into the ESX Server

  1. Download and uncompress the FortiSIEM OVA package from the FortiSIEM image server to the location where you want to install the image.
  2. Log in to the VMware vSphere Client.
  3. In the File menu, select Deploy OVF Template.
  4. Browse to the .ova file (example: FortiSIEM-VA-4.3.1.1145.ova) and select it.

On the OVF Details page you will see the product and file size information.

  5. Click Next.
  6. Click Accept to accept the “End User Licensing Agreement,” and then click Next.
  7. Enter a Name for the Supervisor or Worker, and then click Next.
  8. Select a Storage location for the installed file, and then click Next.

 

Running on VMWare ESX 6.0

If you are importing FortiSIEM VA, Collector, or Report Server images for VMWare on an ESXi 6.0 host, you will also need to “Upgrade VM Compatibility” to ESXi 6.0. If the VM is already started, you need to shut it down and use the “Actions” menu to do this. Due to an incompatibility introduced by VMWare, the Collector VM processes restarted and the Collector could not register with the Supervisor without this upgrade. Similar problems are also likely to occur on the Supervisor, Worker, or Report Server, so make sure their VM compatibility is upgraded as well. More information about VM compatibility is available in the VMWare KB below:

https://kb.vmware.com/kb/1010675

Editing the Supervisor, Collector, or Worker Hardware Settings

Before you start the Supervisor, Worker, or Collector for the first time you need to make some changes to its hardware settings.

  1. In the VMware vSphere client, select the imported Supervisor, Worker, or Collector.
  2. Right-click on the node to open the Virtual Appliance Options menu, and then select Edit Settings… .
  3. Select the Hardware tab, and check that Memory is set to at least 16 GB and CPUs is set to 8.
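As an optional cross-check after the appliance boots, you can confirm the allocated resources from inside the VM. This is a generic Linux check, not a FortiSIEM-specific command:

# Confirm the VM sees 8 CPUs and at least 16 GB of RAM
nproc
grep MemTotal /proc/meminfo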

Setting Local Storage for the Supervisor

Using NFS Storage

You can install the Supervisor using either native ESX storage or NFS storage. These instructions are for creating native ESX storage. See Configuring NFS Storage for VMware ESX Server for more information. If you are using NFS storage, you will set the IP address of the NFS server when you set the storage mount point during the Configuring the Supervisor, Worker, or Collector from the VM Console process.

  1. On Hardware tab, click Add.
  2. In the Add Hardware dialog, select Hard Disk, and then click Next.
  3. Select Create a new virtual disk, and then click Next.
  4. Check that these selections are made in the Create a Disk dialog:
Disk Size: 300GB (see the Hardware Requirements for Supervisor and Worker Nodes in the Browser Support and Hardware Requirements topic for more specific disk size recommendations based on Overall EPS)
Disk Provisioning: Thick Provision Lazy Zeroed
Location: Store to the Virtual Machine
  5. In the Advanced Options dialog, make sure that the Independent option for Mode is not selected.
  6. Check all the options for creating the virtual disk, and then click Finish.
  7. In the Virtual Machine Properties dialog, click OK. The Reconfigure virtual machine task will launch.
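Once the reconfiguration completes and the appliance is powered on, you can verify that the new 300GB disk is visible to the guest. The device name /dev/sdd is typical for the fourth disk but may differ in your environment:

# List block devices and confirm the new event data disk is present
fdisk -l | grep -i 'Disk /dev/sd'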

Troubleshooting Tips for Supervisor Installations

Check the Supervisor System and Directory Level Permissions
Check Backend System Health

Check the Supervisor System and Directory Level Permissions

Use SSH to connect to the Supervisor and check that the cmdb, data, query, querywkr, and svn permissions match those shown below:

 

[root@super ~]# ls -l /
dr-xr-xr-x.   2 root     root      4096 Oct 15 11:09 bin
dr-xr-xr-x.   5 root     root      1024 Oct 15 14:50 boot
drwxr-xr-x    4 postgres postgres  4096 Nov 10 18:59 cmdb
drwxr-xr-x    9 admin    admin     4096 Nov 11 11:32 data
drwxr-xr-x   15 root     root      3560 Nov 10 11:11 dev
-rw-r--r--    1 root     root        34 Nov 11 12:09 dump.rdb
drwxr-xr-x.  93 root     root     12288 Nov 11 12:12 etc
drwxr-xr-x.   4 root     root      4096 Nov 10 11:08 home
dr-xr-xr-x.  11 root     root      4096 Oct 15 11:13 lib
dr-xr-xr-x.   9 root     root     12288 Nov 10 19:13 lib64
drwx------.   2 root     root     16384 Oct 15 14:46 lost+found
drwxr-xr-x.   2 root     root      4096 Sep 23  2011 media
drwxr-xr-x.   2 root     root      4096 Sep 23  2011 mnt
drwxr-xr-x.  10 root     root      4096 Nov 10 09:37 opt
drwxr-xr-x    2 root     root      4096 Nov 10 11:10 pbin
dr-xr-xr-x  289 root     root         0 Nov 10 11:13 proc
drwxr-xr-x    8 admin    admin     4096 Nov 11 00:37 query
drwxr-xr-x    8 admin    admin     4096 Nov 10 18:58 querywkr
dr-xr-x---.   7 root     root      4096 Nov 10 19:13 root
dr-xr-xr-x.   2 root     root     12288 Oct 15 11:08 sbin
drwxr-xr-x.   2 root     root      4096 Oct 15 14:47 selinux
drwxr-xr-x.   2 root     root      4096 Sep 23  2011 srv
drwxr-xr-x    4 apache   apache    4096 Nov 10 18:58 svn
drwxr-xr-x   13 root     root         0 Nov 10 11:13 sys
drwxrwxrwt.   9 root     root      4096 Nov 11 12:12 tmp
drwxr-xr-x.  15 root     root      4096 Oct 15 14:58 usr
drwxr-xr-x.  21 root     root      4096 Oct 15 11:01 var

 

Check that the /data, /cmdb, and /svn directory-level permissions match those shown below:

 

[root@super ~]# ls -l /data
drwxr-xr-x 3 root     root     4096 Nov 11 02:52 archive
drwxr-xr-x 3 admin    admin    4096 Nov 11 12:01 cache
drwxr-xr-x 2 postgres postgres 4096 Nov 10 18:46 cmdb
drwxr-xr-x 2 admin    admin    4096 Nov 10 19:04 custParser
drwxr-xr-x 5 admin    admin    4096 Nov 11 00:29 eventdb
drwxr-xr-x 2 admin    admin    4096 Nov 10 19:04 jmxXml
drwxr-xr-x 2 admin    admin    4096 Nov 11 11:33 mibXml

[root@super ~]# ls -l /cmdb
drwx------ 14 postgres postgres 4096 Nov 10 11:08 data

[root@super ~]# ls -l /svn
drwxr-xr-x 6 apache apache 4096 Nov 10 18:58 repos
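If any of the ownership settings above have drifted, they can usually be restored with chown. The commands below are a minimal sketch based on the expected owners shown in the listings; confirm with FortiSIEM support before changing permissions on a production system:

# Restore expected ownership if it has drifted (run as root)
chown -R postgres:postgres /cmdb
chown -R admin:admin /data /query /querywkr
chown -R apache:apache /svn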

 

Check Backend System Health

Use SSH to connect to the Supervisor and run phstatus to see if the system status metrics match those shown below:

 

 

[root@super ~]# phstatus

Every 1.0s: /opt/phoenix/bin/phstatus.py

System uptime:  12:37:58 up 17:24,  1 user,  load average: 0.06, 0.01, 0.00

Tasks: 20 total, 0 running, 20 sleeping, 0 stopped, 0 zombie

Cpu(s): 8 cores, 0.6%us, 0.7%sy, 0.0%ni, 98.6%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st

Mem: 16333720k total, 5466488k used, 10867232k free, 139660k buffers

Swap: 6291448k total, 0k used, 6291448k free, 1528488k cached

PROCESS                  UPTIME         CPU%           VIRT_MEM       RES_MEM
phParser                 12:00:34       0              1788m          280m
phQueryMaster            12:00:34       0              944m           63m
phRuleMaster             12:00:34       0              596m           85m
phRuleWorker             12:00:34       0              1256m          252m
phQueryWorker            12:00:34       0              1273m          246m
phDataManager            12:00:34       0              1505m          303m
phDiscover               12:00:34       0              383m           32m
phReportWorker           12:00:34       0              1322m          88m
phReportMaster           12:00:34       0              435m           38m
phIpIdentityWorker       12:00:34       0              907m           47m
phIpIdentityMaster       12:00:34       0              373m           26m
phAgentManager           12:00:34       0              881m           200m
phCheckpoint             12:00:34       0              98m            23m
phPerfMonitor            12:00:34       0              700m           40m
phReportLoader           12:00:34       0              630m           233m
phMonitor                31:21          0              1120m          25m
Apache                   17:23:23       0              260m           11m
Node.js                  17:20:54       0              656m           35m
AppSvr                   17:23:16       0              8183m          1344m
DBSvr                    17:23:34       0              448m           17m

 

 

Configuring the Supervisor, Worker, or Collector from the VM Console
  1. In the VMware vSphere client, select the Supervisor, Worker, or Collector virtual appliance.
  2. Right-click to open the Virtual Appliance Options menu, and then select Power > Power On.
  3. In the Virtual Appliance Options menu, select Open Console.
  4. In the VM console, select Set Timezone and then press Enter.
  5. Select your Location, and then press Enter.
  6. Select your Country, and then press Enter.
  7. Select your Timezone, and then press Enter.
  8. Review your Timezone information, select 1, and then press Enter.
  9. When the Configuration screen reloads, select Login, and then press Enter.
  10. Enter the default login credentials.
Login: root
Password: ProspectHills
  11. Run the vami_config_net script to configure the network.

 

  12. When prompted, enter the information for these network components to configure the static IP address: IP Address, Netmask, Gateway, DNS Server(s).
  13. Enter the Host name, and then press Enter.
  14. For the Supervisor, set either the Local or NFS storage mount point.

For a Worker, use the same NFS server IP address that you set for the Supervisor.

Supervisor
Local storage: /dev/sdd
NFS storage: <NFS_Server_IP_Address>:/<Directory_Path>

 

After you set the mount point, the Supervisor will automatically reboot, and in 15 to 25 minutes the Supervisor will be successfully configured.
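After the reboot you can confirm from the console that the event storage is mounted as expected. The checks below are illustrative and depend on whether you chose local or NFS storage:

# Verify the /data partition and any NFS mount after the post-configuration reboot
df -h /data
mount | grep -i nfs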

ISO Installation

These topics cover installation of FortiSIEM from an ISO on a native file system such as Linux, also known as installing “on bare metal.”

Installing a Collector on Bare Metal Hardware



FortiSIEM Installing in Microsoft Hyper-V

Installing in Microsoft Hyper-V

These topics describe how to install FortiSIEM on a Microsoft Hyper-V virtual server.

Importing a Supervisor, Collector, or Worker Image into Microsoft Hyper-V

Supported Versions

FortiSIEM has been tested to run on Hyper-V on Microsoft Windows Server 2012.

 

Importing a Supervisor, Collector, or Worker Image into Microsoft Hyper-V

Using Local or NFS Storage for EventDB in Hyper-V

Before you install a FortiSIEM virtual appliance in Hyper-V, you should decide whether you plan to use NFS storage or local storage to store event information in EventDB. If you decide to use a local disk, you can add a data disk of appropriate size; typically this will be named /dev/sdd if it is the fourth disk. When using a local disk, choose the Dynamically expanding (VHDX) format so that you can resize the disk if your EventDB grows beyond its initial capacity.

If you are going to use NFS storage for EventDB, follow the instructions in the topic Configuring NFS Storage for VMware ESX Server.

Disk Formats for Data Storage

FortiSIEM virtual appliances in Hyper-V use dynamically expanding VHD disks for the root and CMDB partitions, and a dynamically expanding VHDX disk for EventDB. Dynamically expanding disks are used to keep the exported Hyper-V image within reasonable limits. See the Microsoft documentation topic Performance Tuning Guidelines for Windows Server 2012 (or R2) for more information.

  1. Download and uncompress the FortiSIEM OVA package from the FortiSIEM image server to the location where you want to install the image.
  2. Start Hyper-V Manager.
  3. In the Action menu, select Import Virtual Machine.

The Import Virtual Machine Wizard will launch.

  4. Click Next.
  5. Browse to the folder containing the OVA package, and then click Next.
  6. Select the FortiSIEM image, and then click Next.
  7. For Import Type, select Copy the virtual machine, and then click Next.
  8. Select the storage folders for your virtual machine files, and then click Next.
  9. Select the storage folder for your virtual machine’s hard disks, and then click Next.
  10. Verify the installation configuration, and then click Finish.
  11. In Hyper-V Manager, connect to the FortiSIEM virtual appliance and power it on.
  12. Follow the instructions in Configuring the Supervisor, Worker, or Collector from the VM Console to complete the installation.

Related Links

Configuring the Supervisor, Worker, or Collector from the VM Console



FortiSIEM Installing in Linux KVM

Installing in Linux KVM

The basic process for installing a FortiSIEM Supervisor, Worker, or Collector node in Linux KVM is the same as installing these nodes under VMware ESX, so you should follow the instructions in Installing a Supervisor, Worker, or Collector Node in ESX. Since Worker nodes are only used in deployments that use NFS storage, you should first configure your Supervisor node to use NFS storage, and then configure your Worker node using the Supervisor NFS mount point as the mount point for the Worker. Collector nodes are only used in multi-tenant deployments, and need to be registered with a running Supervisor node.

Setting up a Network Bridge for Installing AccelOps in KVM

Importing the Supervisor, Collector, or Worker Image into KVM
Configuring Supervisor Hardware Settings in KVM

Setting up a Network Bridge for Installing AccelOps in KVM

If FortiSIEM is the first guest on KVM, then a bridge network may be required to enable network connectivity. For details see the KVM documentation provided by IBM.

In these instructions, br0 is the initial bridge network, em1 is connected as a management network, and em4 is connected to your local area network.

  1. In the KVM host, go to the directory /etc/sysconfig/network-scripts/.
  2. Create a bridge network config file ifcfg-br0.

 

DEVICE=br0

BOOTPROTO=none

NM_CONTROLLED=yes

ONBOOT=yes

TYPE=Bridge

NAME="System br0"

  3. Edit the network config file ifcfg-em4.

 

DEVICE=em4

BOOTPROTO=shared

NM_CONTROLLED=no

ONBOOT=yes

TYPE=Ethernet

UUID="24078f8d-67f1-41d5-8eea-xxxxxxxxxxxx"

IPV6INIT=no

USERCTL=no

DEFROUTE=yes

IPV4_FAILURE_FATAL=yes

NAME="System em4"

HWADDR=F0:4D:00:00:00:00
BRIDGE=br0

  4. Restart the network service.
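After step 4, a quick way to restart networking and confirm the bridge from the KVM host shell (assuming the standard network service and bridge-utils are installed):

# Restart networking and confirm that br0 exists and em4 is attached to it
service network restart
brctl show br0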
Importing the Supervisor, Collector, or Worker Image into KVM
  1. Download and uncompress the FortiSIEM OVA package from the FortiSIEM image server to the location where you want to install the image.
  2. Start the KVM Virtual Machine Manager.
  3. Select and right-click on a host to open the Host Options menu, and then select New.
  4. In the New VM dialog, enter a Name for your FortiSIEM node.
  5. Select Import existing disk image, and then click Forward.
  6. Browse to the location of the OVA package and select it.
  7. Choose the OS Type and Version you want to use with this installation, and then click Forward.
  8. Allocate Memory and CPUs to the FortiSIEM node as recommended in the topic Browser Support and Hardware Requirements, and then click Forward.
  9. Confirm the installation configuration of your node, and then click Finish.
Configuring Supervisor Hardware Settings in KVM
  1. In KVM Virtual Machine Manager, select the FortiSIEM Supervisor, and then click Open.
  2. Click the Information icon to view the Supervisor hardware settings.
  3. Select the Virtual Network Interface.
  4. For Source Device, select an available bridge network.

See Setting up a Network Bridge for Installing FortiSIEM in KVM for more information.

  5. For Device model, select Hypervisor default, and then click Apply.
  6. In the Supervisor Hardware settings, select Virtual Disk.
  7. In the Virtual Disk dialog, open the Advanced options, and for Disk bus, select IDE.
  8. Click Add Hardware, and then select Storage.
  9. Select the Select managed or other existing storage option, and then browse to the location for your storage.

You will want to set up a disk for both CMDB (60GB) and SVN (60GB). If you are setting up FortiSIEM Enterprise, you may also want to create a storage disk for EventDB, with Storage format set to Raw.

  10. In the KVM Virtual Machine Manager, connect to the FortiSIEM Supervisor and power it on.
  11. Follow the instructions in Configuring the Supervisor, Worker, or Collector from the VM Console to complete the installation.

Related Links

Configuring the Supervisor, Worker, or Collector from the VM Console



Hypervisor Installations

Hypervisor Installations

Topics in this section cover the instructions for importing the AccelOps disk image into specific hypervisors and configuring the AccelOps virtual appliance. See the topics under General Installation for information on installation tasks that are common to all hypervisors.

Installing in Amazon Web Services (AWS)

Determining the Storage Type for EventDB in AWS

Configuring Local Storage in AWS for EventDB

Setting Up Supervisor, Worker and Collector Nodes in AWS

Setting Up AWS Instances

Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS

Configuring the Supervisor and Worker Nodes in AWS

Registering the Collector to the Supervisor in AWS

Setting up a Network Bridge for Installing AccelOps in KVM

Importing the Supervisor, Collector, or Worker Image into KVM
Configuring Supervisor Hardware Settings in KVM

Importing a Supervisor, Collector, or Worker Image into Microsoft Hyper-V

Setting the Network Time Protocol (NTP) for ESX

Installing a Supervisor, Worker, or Collector Node in ESX

Importing the Supervisor, Collector, or Worker Image into the ESX Server

Editing the Supervisor, Collector, or Worker Hardware Settings

Setting Local Storage for the Supervisor

Troubleshooting Tips for Supervisor Installations

Configuring the Supervisor, Worker, or Collector from the VM Console

Installing in Amazon Web Services (AWS)

You Must Use an Amazon Virtual Private Cloud with AccelOps

You must set up a Virtual Private Cloud (VPC) in Amazon Web Services for FortiSIEM deployment rather than EC2-Classic. FortiSIEM does not support installation in EC2-Classic. See the Amazon VPC documentation for more information on setting up and configuring a VPC. See Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS for information on how to prevent the public IPs of your instances from changing when they are stopped and started.

Using NFS Storage with Amazon Web Services

If the aggregate EPS for your FortiSIEM installation requires a cluster (a FortiSIEM virtual appliance plus Worker nodes), then you must set up an NFS server. If your storage requirements for the EventDB are more than 1TB, it is strongly recommended that you use an NFS server where you can configure LVM+RAID0. For more information, see Setting Up NFS Storage in AWS.

 

Determining the Storage Type for EventDB in AWS

Configuring Local Storage in AWS for EventDB

Setting Up Supervisor, Worker and Collector Nodes in AWS

Setting Up AWS Instances

Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS

Configuring the Supervisor and Worker Nodes in AWS

Registering the Collector to the Supervisor in AWS

Note: SVN password reset issue after system reboot for FortiSIEM 3.7.6 customers in AWS Virtual Private Cloud (VPC)

FortiSIEM uses SVN to store monitored device configurations. In an AWS VPC setup, we have noticed that the FortiSIEM SVN password can change if the system reboots, which prevents FortiSIEM from storing new configuration changes and viewing old configurations. The following procedure can be used to reset the SVN password to the FortiSIEM factory default so that FortiSIEM can continue working correctly.

This script needs to be run only once.

  1. Log on to the Supervisor.
  2. Copy the attached “ao_svnpwd_reset.sh” script to the Supervisor in your EC2+VPC deployment.
  3. Stop all backend processes before running the script by issuing the following command: phtools --stop all
  4. Run the following command to change the script permissions: chmod +x ao_svnpwd_reset.sh
  5. Execute ao_svnpwd_reset.sh as the root user: ./ao_svnpwd_reset.sh
  6. The system will reboot.
  7. Check SVN access to make sure that old configurations can be viewed.
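For reference, the command sequence from the procedure above looks like the following when run on the Supervisor; ao_svnpwd_reset.sh is the script attached to the original note:

# Run once on the Supervisor as root; the system reboots when the script finishes
phtools --stop all
chmod +x ao_svnpwd_reset.sh
./ao_svnpwd_reset.sh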
Determining the Storage Type for EventDB in AWS

If the aggregate EPS for your FortiSIEM installation requires a cluster (a virtual appliance plus Worker nodes), then you must set up an NFS server as described in Using NFS Storage with Amazon Web Services. If your storage requirement for EventDB is more than 1TB, it is recommended that you use an NFS server where you can configure LVM+RAID0, which is also described in those topics. Although it is possible to set up a similar LVM+RAID0 configuration on the FortiSIEM virtual appliance itself, this has not been tested.

Here’s an example of how to calculate storage requirements: at 5,000 EPS, daily storage requirements come to about 22-30GB (300k events take roughly 15-20MB on average in compressed format in the EventDB). So, in order to have 6 months of data available for querying, you need 4-6TB of storage.

If you only need one FortiSIEM node and your storage requirements are lower than 1TB and are not expected to ever grow beyond this limit, you can avoid setting up an NFS server and use a local EBS volume for EventDB. For this option, see the topic Configuring Local Storage in AWS for EventDB.
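A quick way to reproduce the sizing arithmetic above for your own EPS figure; the 5,000 EPS, 180-day retention, and 64-byte compressed event size below are just the example numbers used in this guide:

# Estimate EventDB storage: events/day x average compressed event size, multiplied by retention days
awk 'BEGIN {
  eps = 5000; days = 180; bytes_per_event = 64;   # example values from this guide
  daily_gb = eps * 86400 * bytes_per_event / (1024^3);
  printf "daily: %.1f GB, %d days: %.1f TB\n", daily_gb, days, daily_gb * days / 1024;
}'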

Configuring Local Storage in AWS for EventDB

Create the Local Storage Volume

Attach the Local Storage Volume to the Supervisor

Create the Local Storage Volume

  1. Log in to AWS.
  2. In the EC2 dashboard, click Volumes.
  3. Click Create Volume.
  4. Set Size to 100 GB to 1 TB (depending on storage requirement).
  5. Select the same Availability Zone region as the FortiSIEM Supervisor instance.
  6. Click Create.

Attach the Local Storage Volume to the Supervisor

  1. In the EC2 dashboard, select the local storage volume.
  2. In the Actions menu, select Attach Volume.
  3. For Instance, enter the Supervisor ID.
  4. For Device, enter /dev/xvdi.
  5. Click Attach.
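If you prefer the AWS CLI to the console, the equivalent calls look roughly like this; the size, Availability Zone, volume ID, and instance ID are placeholders you would replace with your own values:

# Create a volume in the same Availability Zone as the Supervisor, then attach it as /dev/xvdi
aws ec2 create-volume --size 500 --availability-zone us-west-2a --volume-type gp2
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvdi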

 

Setting Up Supervisor, Worker and Collector Nodes in AWS

The basic process for installing a FortiSIEM Supervisor, Worker, or Collector node is the same. Since Worker nodes are only used in deployments that use NFS storage, you should first configure your Supervisor node to use NFS storage, and then configure your Worker node using the Supervisor NFS mount point as the mount point for the Worker. See Configuring NFS Storage for VMware ESX Server for more information. Collector nodes are only used in multi-tenant deployments, and need to be registered with a running Supervisor node.

Setting Up AWS Instances

Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS

Configuring the Supervisor and Worker Nodes in AWS

Registering the Collector to the Supervisor in AWS

When you’re finished with the specific hypervisor setup process, you need to complete your installation by following the steps described under General Installation.


Setting Up AWS Instances


  1. Log in to your AWS account and navigate to the EC2 dashboard.
  2. Click Launch Instance.
  3. Click Community AMIs and search for the AMI ID associated with your version of FortiSIEM. The latest AMI IDs are on the image server where you download the other hypervisor images.
  4. Click Select.
  5. Click Compute Optimized.

Using C3 Instances

You should select one of the C3 instances with a Network Performance rating of High, or 10Gb performance. The current generation of C3 instances run on the latest Intel Xeons that AWS provides. If you are running these machines in production, it is significantly cheaper to use EC2 Reserved Instances (1 or 3 year) as opposed to on-demand instances.

  6. Click Next: Configure Instance Details.
  7. Review these configuration options:
Network and Subnet: Select the VPC you set up for your instance.
Number of Instances: For enterprise deployments, set to 1. For a configuration of 1 Supervisor + 2 Workers, set to 3. You can also add instances later to meet your needs.
Public IP: Clear the option Automatically assign a public IP address to your instances if you want to use VPN.
Placement Group: A placement group is a logical grouping for your cluster instances. Placement groups provide low-latency, full-bisection 10Gbps bandwidth between instances. Select an existing group or create a new one.
EBS-Optimized Instance: An EBS-optimized instance enables dedicated throughput between Amazon EBS and Amazon EC2, providing improved performance for your EBS volumes. Note that if you select this option, additional Amazon charges may apply.
  8. Click Next: Add Storage.
  9. For Size, Volume Type, and IOPS, set options for your configuration.
  10. Click Next: Tag Instance.
  11. Under Value, enter the Name you want to assign to all the instances you will launch, and then click Create Tag.

After you complete the launch process, you will have to rename each instance to correspond to its role in your configuration, such as Supervisor, Worker1, Worker2.

  12. Click Next: Configure Security Group.
  13. Select Select an Existing Security Group, and then select the default security group for your VPC.

FortiSIEM needs access to HTTPS over port 443 for GUI and API access,  and access to SSH over port 22 for remote management, which are set in the default security group. This group will allow traffic between all instances within the VPC.

  14. Click Review and Launch.
  15. Review all your instance configuration information, and then click Launch.
  16. Select an existing Key Pair, or create a new one, to connect to these instances via SSH.

If you use an existing key pair, make sure you have access to it. If you are creating a new key pair, download the private key and store it in a secure location accessible from the machine from where you usually connect to these AWS instances.

  17. Click Launch Instances.
  18. When the EC2 Dashboard reloads, check that all your instances are up and running.
  19. All your instances will be tagged with the Name you assigned in Step 11; select an instance to rename it according to its role in your deployment.
  20. For all types of instances, follow the instructions to SSH into the instances as described in Configuring the Supervisor and Worker Nodes in AWS, and then run the script sh to check the health of the instances.

Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS

You need to create VPC-based Elastic IPs and attach them to your nodes so the public IPs don’t change when you stop and start instances.

  1. Log in to the Amazon VPC Console.
  2. In the navigation pane, click Elastic IPs.
  3. Click Allocate New Address.
  4. In the Allocate New Address dialog box, in the Network platform list, select EC2-VPC, and then click Yes, Allocate.
  5. Select the Elastic IP address from the list, and then click Associate Address.
  6. In the Associate Address dialog box, select the network interface for the NAT instance. Select the address to associate the EIP with from the Private IP address list, and then click Yes, Associate.

Configuring the Supervisor and Worker Nodes in AWS

  1. From the EC2 dashboard, select the instance, and then click Connect.
  2. Select Connect with a standalone SSH client, and follow the instructions for connecting with an SSH client.

For the connection command, follow the example provided in the connection dialog, but substitute the FortiSIEM root user name for ec2-user@xxxxxx. The ec2-user name is used only for the Amazon Linux NFS server.
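For example, a standalone SSH connection to the Supervisor typically looks like the following; the key file name and IP address are placeholders for your own key pair and instance address:

# Connect to the Supervisor instance as root using the key pair selected at launch
ssh -i ~/.ssh/fortisiem-aws.pem root@<Supervisor_Public_IP>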

  3. SSH to the Supervisor.
  4. Run cd /opt/phoenix/deployment/jumpbox/aws.
  5. Run the script pre-deployment.sh to configure the host name and NFS mount point.
  6. Accept the License Agreements.
NFS Storage: <NFS Server IP>:/data (for <NFS Server IP>, use the 10.0.0.X IP address of the NFS server running within the VPC)
Local Storage: /dev/xvdi
  7. The system will reboot.
  8. Log in to the Supervisor.
  9. Register the Supervisor by following the steps in Registering the Supervisor.
  10. Run cd /opt/phoenix/deployment/jumpbox/aws.
  11. Run the script sh (now includes running post-deployment.sh automatically).
  12. The system will reboot and is now ready.
  13. To install a Worker node, follow the preceding steps; the Worker is then ready.
  14. To add a Worker to the cluster (assuming the Worker is already installed):
    1. Log in to the FortiSIEM GUI.
    2. Go to Admin > License Management > VA Information.
    3. Click Add.
    4. Enter the private address of the Worker node.

Registering the Collector to the Supervisor in AWS

  1. Locate a Windows machine on AWS.
  2. Open a Remote desktop session from your PC to that Windows machine on AWS.
  3. Within the remote desktop session, launch a browser and navigate to https://<Collector-IP>:5480
  4. Enter the Collector setup information.
Name: Collector Name
User ID: Admin User
Password: Admin Password
Cust/Org ID: Organization Name
Cloud URL: Supervisor URL
  5. Click to submit the registration.

The Collector will restart automatically after registration succeeds.



Browser Support and Hardware Requirements

Browser Support and Hardware Requirements

Supported Operating Systems and Browsers

Hardware Requirements for Supervisor and Worker Nodes

Hardware Requirements for Collector Nodes

Hardware Requirements for Report Server Nodes

Supported Operating Systems and Browsers

These are the browsers and operating systems that are supported for use with the FortiSIEM web client.

OS Supported | Browsers Supported
Windows | Firefox, Chrome, Internet Explorer 11.x, Microsoft Edge
Mac OS X | Firefox, Chrome, Safari
Linux | Firefox, Chrome

 

Hardware Requirements for Supervisor and Worker Nodes

The FortiSIEM Virtual Appliance can be installed using either storage configured within the ESX server or NFS storage. See the topic Configuring NFS Server for more information on working with NFS storage.

Event Data Storage Requirements

The storage requirement shown in the Event Data Storage column is only for the eventdb data, but the /data partition also includes CMDB backups and queries. You should set the /data partition to a larger amount of storage to accommodate for this.

Encryption for Communication Between FortiSIEM Virtual Appliances

All communication between Collectors that are installed on-premises and FortiSIEM Supervisors and Workers is secured by TLS 1.2 encryption. Communications are managed by OpenSSL/Apache HTTP Server/mod_ssl on the Supervisor/Worker side, and by libcurl, using the NSS library for SSL, on the Collector side. The FortiSIEM Supervisors and Workers use a 2048-bit RSA certificate by default.

 

You can control the exact ciphers used for communications between virtual appliances by editing the SSLCipherSuite section in the file /etc/httpd/conf.d/ssl.conf on FortiSIEM Supervisors and Workers. You can test the cipher suite for your Supervisor or Worker using the following nmap command:

nmap --script ssl-cert,ssl-enum-ciphers -p 443 <super_or_worker_fqdn>

Calculating Events per Second (EPS) and Exceeding the License Limit

AccelOps calculates the EPS for your system using a counter that records the total number of received events in a three-minute time interval. Every second, a thread wakes up and checks the counter value. If the counter is less than 110% of the license limit (using the calculation 1.1 x EPS License x 180), then AccelOps will continue to collect events. If you exceed 110% of your licensed EPS, events are dropped for the remainder of the three-minute window, and an email notification is triggered. At the end of the three-minute window the counter resets and resumes receiving events.
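To see what that three-minute ceiling works out to for a given license, you can plug your licensed EPS into the 1.1 x EPS x 180 calculation above; the 5,000 EPS value below is only an illustration:

# Event ceiling per three-minute window for an example 5,000 EPS license
awk 'BEGIN { eps = 5000; printf "%d events per 3-minute window\n", 1.1 * eps * 180 }'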

Overall EPS | Quantity | Host SW | Processor | Memory | OS/App and CMDB Storage | Event Data Storage (1 year)
1,500 | 1 | ESXi (4.0 or later preferred) | 4 Core 3 GHz, 64 bit | 16 GB (24 GB for 4.5.1+) | 200GB (80GB OS/App, 60GB CMDB, 60GB SVN) | 3 TB
4,500 | 1 | ESXi (4.0 or later preferred) | 4 Core 3 GHz, 64 bit | 16 GB (24 GB for 4.5.1+) | 200GB (80GB OS/App, 60GB CMDB, 60GB SVN) | 8 TB
7,500 | 1 Super, 1 Worker | ESXi (4.0 or later preferred) | Super: 8 Core 3 GHz, 64 bit; Worker: 4 Core 3 GHz, 64 bit | Super: 24 GB; Worker: 16 GB | Super: 200GB (80GB OS/App, 60GB CMDB, 60GB SVN); Worker: 200GB (80GB OS/App) | 12 TB
10,000 | 1 Super, 1 Worker | ESXi (4.0 or later preferred) | Super: 8 Core 3 GHz, 64 bit; Worker: 4 Core 3 GHz, 64 bit | Super: 24 GB; Worker: 16 GB | Super: 200GB (80GB OS/App, 60GB CMDB, 60GB SVN); Worker: 200GB (80GB OS/App) | 17 TB
20,000 | 1 Super, 3 Workers | ESXi (4.0 or later preferred) | Super: 8 Core 3 GHz, 64 bit; Worker: 4 Core 3 GHz, 64 bit | Super: 24 GB; Worker: 16 GB | Super: 200GB (80GB OS/App, 60GB CMDB, 60GB SVN); Worker: 200GB (80GB OS/App) | 34 TB
30,000 | 1 Super, 5 Workers | ESXi (4.0 or later preferred) | Super: 8 Core 3 GHz, 64 bit; Worker: 4 Core 3 GHz, 64 bit | Super: 24 GB; Worker: 16 GB | Super: 200GB (80GB OS/App, 60GB CMDB, 60GB SVN); Worker: 200GB (80GB OS/App) | 50 TB
Higher than 30,000 | Consult FortiSIEM | | | | |
Hardware Requirements for Collector Nodes
Component | Quantity | Host SW | Processor | Memory | OS/App Storage
Collector | 1 | ESX | 2 Core 2 GHz, 64 bit | 4 GB | 40 GB
Collector | 1 | Native Linux (suggested platform: Dell PowerEdge R210 Rack Server) | 2 Core, 64 bit | 4 GB | 40 GB
Hardware Requirements for Report Server Nodes
Component | Quantity | Host SW | Processor | Memory | OS/App Storage | Reports Data Storage (1 year)
Report Server | 1 | ESX | 8 Core 3 GHz, 64 bit | 16 GB | 200GB (80GB OS/App, 60GB CMDB, 60GB SVN) | See recommendations under Hardware Requirements for Supervisor and Worker Nodes
 

 

 

Information Prerequisites for All FortiSIEM Installations

You should have this information ready before you begin installing the FortiSIEM virtual appliance on ESX:

  1. The static IP address and subnet mask for your FortiSIEM virtual appliance.
  2. The IP address of NFS mount point and NFS share name if using NFS storage. See the topics Configuring NFS Storage for VMware ESX Server and Setting Up NFS Storage in AWS for more information.
  3. The FortiSIEM host name within your local DNS server.
  4. The VMWare ESX datastore location where the virtual appliance image will be stored if using ESX storage.

 



System Performance Estimates and Recommendations for Large Scale Deployments

System Performance Estimates and Recommendations for Large Scale Deployments

This topic includes estimates and recommendations for storage capacity, disk performance, and network throughput for optimum performance of FortiSIEM deployments processing over 10,000 EPS.

In general, event ingestion at high EPS requires lower storage IOPS than for queries simply because queries need to scan higher volumes of data that has accumulated over time. For example, at 20,000 EPS, you have 86,400 times more data in a day than in one second, so a query such as ‘Top Event types by count for the past 1 day’ will need to scan 20,000 x 86,400 = ~ 1.72 billion events. Therefore, it is important to size your FortiSIEM cluster to handle your query and report requirements first, which will also handle event ingestion very well. These are the top 3 things to do for acceptable FortiSIEM query performance:

  1. Add more worker nodes, higher than what is required for event ingestion alone
  2. 10Gbps network on NFS server is a must, and if feasible on Supervisor and Worker nodes as well
  3. SSD Caching on NFS server – The size of the SSD should be as close to the size required to cache hot data. In typical customer scenarios, the last 1 month data can be considered hot data because monthly reports are quite commonly run

 

Schedule frequently run reports into the dashboard

If you have frequently run ranking reports that have group-by criteria (as opposed to raw message based reports), you can add such reports to a custom dashboard so that FortiSIEM schedules them to run in inline mode. These reports compute their results in a streaming manner as event data is processed in real time, and they place little burden on storage IOPS because they read very little data from the EventDB. Note that raw message reports (no group-by) are always computed directly from the EventDB.

 

An example scenario is presented at the end of this guide.

System Performance Component: Estimates and Recommendations

Event Storage Capacity: Storage capacity estimates are based on an average event size of 64 compressed bytes x EPS (events per second). Browser Support and Hardware Requirements includes a table with storage capacity requirements for up to 10,000 EPS.
Root Disk IOPS: Standard hard disk IOPS.

CMDB Disk IOPS: 1000 IOPS or more. Lab testing for EC2 scalability used 2000 IOPS.

SVN Disk IOPS: 1000 IOPS.

EventDB IOPS for Event Ingestion: 1000 IOPS for 100K EPS (minimum).

EventDB Read IOPS for Queries: As high as feasible to improve query performance (use SSD caching on the NFS server when feasible). In EC2 scalability testing, 2000 read IOPS while ingesting 100K EPS using one Supervisor and two Workers produced these results:

Index Query (no filter, display COUNT(Matched Events), group by event type for 24 hours)
1. Total events processed = 2,594,816,711 (2.59 billion events)
2. Average events per second scanned by the query (QEPS) = 1.02 million QEPS
3. Average query runtime = 2543 seconds (~42 minutes)

Raw Event Log Query (same as the Index Query, with the filter Raw Event Log contains ‘e’)
1. Total events processed = 350,914,385 (350 million events)
2. Average events per second scanned by the query (QEPS) = 179,909 EPS (179k QEPS)
3. Average query runtime = 1950 seconds (~33 minutes)

Network Throughput: A 10Gbps network between the Supervisor, Workers, and NFS server is recommended.

Using VMXNet3 Adapter for VMware

To achieve the best network throughput in VMware environments, delete the E1000 network adapter and add one that uses VMXNet3 for the eth0/eth1 network configuration. The VMXNet3 adapter supports 10Gbps networking between VMs on the same host as well as across hosts, though you must also have a 10Gbps physical network adapter to achieve that level of throughput across hosts. You may need to upgrade the virtual hardware version (VMware KB 1003746) in order to use VMXNet3. More details on the different types of VMware network adapters are available in VMware KB 1001805.

Achieving 10Gbps on AWS EC2

To achieve 10Gbps in the AWS EC2 environment, you will need to:

1. Deploy the FortiSIEM Supervisor, Workers, and NFS server on the 8xlarge class of instances (for example, c3.8xlarge). Refer to EC2 Instance Types for available types, and look for instance types with 10 Gigabit noted next to them.

2. Use the HVM image for both the FortiSIEM image and the NFS server image that supports enhanced networking.

3. Place the Supervisor, Workers, and NFS server in the same AWS EC2 placement group within an AWS VPC.

Network Interfaces: FortiSIEM recommends the use of separate network interfaces for event ingestion/GUI access and for storage data to NFS.

Number of Workers: 6000 EPS per Worker for event ingestion. Add more Worker nodes for query performance. See the example below.

 

Example:

An MSP customer has 12,000 EPS across all their customers. Each event takes up 64 bytes on average in compressed form in the EventDB.

These calculations are just extrapolations based on a test on EC2. Actual results may vary because of differences in hardware, event data, and types of queries. Therefore, it is recommended that customers do a pilot evaluation using production data, either on-premises or on AWS, before arriving at an exact number of Worker nodes.



FortiSIEM Installation

Installation

Additional Information in the Help Center

You can find additional information about installation, upgrades, and license management for your AccelOps deployment in the Installation, Upgrades, and Licenses section of the Help Center maintained by AccelOps Support.

The topics in this section are intended to guide you through the basic process of setting up and configuring your AccelOps deployment. This includes downloading and installing the AccelOps OVA image, using your hypervisor virtual machine manager to configure the hardware settings for your AccelOps node, setting up basic configurations on your Supervisor node, and registering your Supervisor and other nodes. Setting up IT infrastructure monitoring, including device discovery, monitoring configuration, and setting up business services, is covered under the section Configuring Your AccelOps Platform.

What You Need to Know before You Begin Installation
What Kind of Deployment Will You Set Up?

Who Will Install and Configure AccelOps?

What Information Do You Need to Get Started?
The Basic Installation Process

What You Need to Know before You Begin Installation

What Kind of Deployment Will You Set Up?

Before beginning installation you should have determined the exact deployment configuration you will follow, as described in the topics under Deployment Options. Note that many deployment options have particular hardware requirements. For example, if you intend to use an NFS server for a cluster deployment, or if you want to use Visual Analytics, you will need to make sure that you have the necessary hardware and network components in place. We strongly recommend that you read through all the installation topics for your deployment configuration before you begin.

Who Will Install and Configure AccelOps?

These topics assume that you have the basic system administration skills required to install AccelOps, and that you are already familiar with the use of hypervisors such as VMware ESX or, if you are setting up a Cloud deployment, that you are already familiar with Cloud environments such as Amazon Web Services.

What Information Do You Need to Get Started?

You will need to have administrator-level permissions on the host where you will download and install AccelOps, and you will also need the username and password associated with your AccelOps license. If you intend to use NFS storage for event data, you will also need to have set up an NFS server prior to installation.

The Basic Installation Process

The installation process for any AccelOps deployment consists of a few steps:

Import the AccelOps virtual appliance into a hypervisor or Amazon Web Services environment

Edit the virtual appliance hardware settings

Start and configure the virtual appliance from the hypervisor console

Register the virtual appliance

Topics in this section will take you through the specific installation and configuration instructions for the most popular hypervisors and deployment configurations.

System Performance Estimates and Recommendations for Large Scale Deployments

Browser Support and Hardware Requirements

Information Prerequisites for All FortiSIEM Installations

Hypervisor Installations

Installing in Amazon Web Services (AWS)

Determining the Storage Type for EventDB in AWS

Configuring Local Storage in AWS for EventDB

Setting Up Supervisor, Worker and Collector Nodes in AWS

Setting Up AWS Instances

Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS
Configuring the Supervisor and Worker Nodes in AWS

Registering the Collector to the Supervisor in AWS

Setting up a Network Bridge for Installing AccelOps in KVM

Importing the Supervisor, Collector, or Worker Image into KVM
Configuring Supervisor Hardware Settings in KVM

Importing a Supervisor, Collector, or Worker Image into Microsoft Hyper-V

Setting the Network Time Protocol (NTP) for ESX

Installing a Supervisor, Worker, or Collector Node in ESX

Importing the Supervisor, Collector, or Worker Image into the ESX Server

Editing the Supervisor, Collector, or Worker Hardware Settings

Setting Local Storage for the Supervisor

Troubleshooting Tips for Supervisor Installations

Configuring the Supervisor, Worker, or Collector from the VM Console

ISO Installation

Installing a Collector on Bare Metal Hardware

General Installation

Configuring Worker Settings

Registering the Supervisor

Registering the Worker

Registering the Collector to the Supervisor

Using NFS Storage with AccelOps

Configuring NFS Storage for VMware ESX Server

Using NFS Storage with Amazon Web Services

Setting Up NFS Storage in AWS

Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS

Moving CMDB to a separate Database Host

FortiSIEM Windows Agent and Agent Manager Install

FortiSIEM Windows Agent Pre-installation Notes

Installing FortiSIEM Windows Agent Manager

Installing FortiSIEM Windows Agent

 



Matrix of Multi-Tenancy Deployment Configuration Options

Matrix of Multi-Tenancy Deployment Configuration Options

This matrix shows the components required for each multi-tenancy deployment option.

Deployment Option | Required Components | Description

Single Multi-Tenant Supervisor Node | Supervisor Node | This is the most basic single-site multi-tenant deployment, primarily suitable for hosting providers. Organizations are created by splitting up the IP address space.

Multi-Tenant Supervisor Node with Collectors | Supervisor Node, Collector Node | This is a service provider deployment covering multiple sites. Data collection is simplified by deploying a Collector for the satellite sites. You can add organizations by assigning a Collector to an organization, or by splitting up the IP address space.

Multi-Tenant Cluster | Supervisor Node, Worker Node, NFS Server | This is a scalable service provider deployment suitable for deployments with large compute and storage needs. An NFS server is required in the data sharing architecture between Supervisor and Worker nodes. Organizations are created by splitting up the IP address space.

Multi-Tenant Cluster with Collectors | Supervisor Node, Worker Node, Collector Node, NFS Server | This deployment adds Collectors to the configuration and is the most comprehensive service provider deployment. You can add organizations by assigning a Collector to an organization, or by splitting up the IP address space.

Multi-Tenant Supervisor Node with Tableau Visual Analytics | Supervisor Node, Report Server, Visual Analytics Server | This is the most basic single-site multi-tenant deployment, with added capability for Visual Analytics with Tableau.

Multi-Tenant Supervisor Node with Collectors and Tableau Visual Analytics | Supervisor Node, Collector Node, Report Server, Visual Analytics Server | This is a service provider deployment covering multiple sites, with added capability for Visual Analytics with Tableau. Data collection is simplified by deploying a Collector for the satellite sites.

Multi-Tenant Cluster with Tableau Visual Analytics | Supervisor Node, Worker Node, NFS Server, Report Server, Visual Analytics Server | This is a scalable service provider deployment, with added capability for Visual Analytics with Tableau. An NFS server is required in the data sharing architecture between Supervisor and Worker nodes.

Multi-Tenant Cluster with Collectors and Tableau Visual Analytics | Supervisor Node, Worker Node, Collector Node, NFS Server, Report Server, Visual Analytics Server | This deployment adds Collectors to the configuration and is the most comprehensive service provider deployment, with added capability for Visual Analytics with Tableau.

 

 

