
Migrating an ESX NFS-based Deployment in Place

Overview

In this migration method, the production FortiSIEM systems are upgraded in place, meaning that the production 3.7.x virtual appliance is stopped and used for migrating the CMDB to the 4.2.1 virtual appliance. The advantage of this approach is that no extra hardware is needed, while the disadvantage is extended downtime during the CMDB archive and upgrade process. During this downtime events are not lost but are buffered at the collector; however, incidents are not triggered while events are buffered. Prior to the CMDB upgrade process, you might want to take a snapshot of the CMDB to use as a backup if needed.

The steps for this process are:

Overview

Prerequisites

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Mounting the NFS Storage on Supervisors and Workers

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running the phtools command.
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to /root.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created in step 7. (A hedged command sketch follows these steps.)

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance.

This file will be used during the CMDB restoration process.
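As a rough illustration of the archive and migration commands above, a minimal sketch might look like the following. The archive destination directory and the exact phtools and archiver arguments are assumptions rather than values taken from this guide, so check each script's usage output before running it.

cd /root
tar -xvf ao-db-migration-4.2.1.tar
ls -al ao-db-migration.sh ao-db-migration-archiver.sh    # both should be owned by root
phtools --stop all                                        # assumed syntax for stopping backend processes
./ao-db-migration-archiver.sh /archive                    # hypothetical destination directory
ls /archive/phoenixdb_migration_* /root/opt-migration-*.tar
./ao-db-migration.sh /archive/phoenixdb_migration_<timestamp> /archive/migrated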

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Mounting the NFS Storage on Supervisors and Workers

Follow this process for each Supervisor and Worker in your deployment.

  1. Log in to your virtual appliance as root over SSH.
  2. Run the mount command to check the mount location.
  3. In the /etc/fstab file on the Supervisor or Worker, change the mount path to the 3.7.x mount path location (see the example entry after these steps).
  4. Reboot the Supervisor or Worker.
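For reference, the 3.7.x NFS mount path entry in /etc/fstab typically looks like the line below. The server address and export path here are placeholders; use the values shown by the mount command on your 3.7.x system.

192.168.20.10:/data   /data   nfs   defaults   0 0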

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

  1. In the vSphere client, power off the 3.7.x Supervisor.

The IP Address for the 3.7.x Supervisor will be transferred to the 4.2.1 Supervisor.

  2. Log in to the 4.2.1 Supervisor as root over SSH.
  3. Run the vami_config_net script and assign the 3.7.x Supervisor’s IP address to it.

Your virtual appliance will reboot when the IP address change is complete.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh.
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

 



Migrating an ESX Local Disk-based Deployment Using an rsync Tool

Overview

This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on a local VMware disk, where you intend to run the 4.2.1 version on a different physical machine than the 3.7.x version. This process requires these steps:

Overview

Prerequisites

Copy the 3.7.x CMDB to a 4.2.1 Virtual Appliance Using rsync

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Copy the 3.7.x CMDB to a 4.2.1 Virtual Appliance Using rsync

  1. Log in to the 4.2.1 virtual appliance as root.
  2. Check the disk size in the remote system to make sure that there is enough space for the database to be copied over.
  3. Copy the /data directory from the 3.7.x virtual appliance to the 4.2.1 virtual appliance using the rsync tool (see the example after these steps).
  4. After copying is complete, make sure that the size of the event database is identical to that on the 3.7.x system.
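A minimal rsync invocation for step 3, run from the 4.2.1 virtual appliance, might look like the following. The 3.7.x address is a placeholder and the flags are common defaults rather than values taken from this guide.

rsync -avz root@<3.7.x-va-ip>:/data/ /data/
du -sh /data    # compare against the event database size on the 3.7.x system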

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running the phtools command.
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to /root.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created in step 7.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance. This file will be used during the CMDB restoration process.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

  1. In the vSphere client, power off the 3.7.x Supervisor.

The IP Address for the 3.7.x Supervisor will be transferred to the 4.2.1 Supervisor.

  2. Log in to the 4.2.1 Supervisor as root over SSH.
  3. Run the vami_config_net script and assign the 3.7.x Supervisor’s IP address to it.

Your virtual appliance will reboot when the IP address change is complete.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh.
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

 



FortiSIEM Migrating VMware ESX-based Deployments

The options for migrating VMware ESX deployments depend on whether you are using NFS for storage, and whether you choose to migrate in place, or by using a staging system or rsync. Using the staging system requires more hardware, but minimizes downtime and CMDB migration risk compared to the in-place approach. The rsync method takes longer to complete because the event database has to be copied. If you use the in-place method, then we strongly recommend that you take snapshots of the CMDB for recovery.

 

Migrating an ESX Deployment with Local Storage in Place

Overview

This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on a local VMware disk, where you intend to run a 4.2.x version on the same physical machine as the 3.7.x version, but as a new virtual machine. This process requires these steps:

Prerequisites

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Use More Storage for Your 4.2.1 Virtual Appliance

Install the 4.2.1 virtual appliance on the same host as the 3.7.x version with a local disk that is larger than the original 3.7.x version. You will need the extra disk space for copying operations during the migration.

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running the phtools command.
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to /root.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created in step 7.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance. This file will be used during the CMDB restoration process.

Removing the Local Disk from the 3.7.x Virtual Appliance

  1. Log in to your vSphere client.
  2. Select your 3.7.x virtual appliance and power it off.
  3. Open the Hardware properties for your virtual appliance.
  4. Select Hard disk 3, and then click Remove.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Adding the Local Disk to the 4.2.1 Virtual Appliance

  1. Log into your vSphere client.
  2. Select your 4.2.1 virtual appliance and power it off.
  3. Go to the Hardware settings for your virtual appliance and select Hard disk 3.
  4. Click Remove.
  5. Click Add.
  6. For Device Type, select Hard Disk, and then click Next.
  7. Select Use an existing virtual disk, and then click Next.
  8. Browse to the location of the migrated virtual disk that was created by the migration script, and then click OK.
  9. Power on the virtual appliance.

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

  1. In the vSphere client, power off the 3.7.x Supervisor.

The IP Address for the 3.7.x Supervisor will be transferred to the 4.2.1 Supervisor.

  2. Log in to the 4.2.1 Supervisor as root over SSH.
  3. Run the vami_config_net script and assign the 3.7.x Supervisor’s IP address to it.

Your virtual appliance will reboot when the IP address change is complete.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh.
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

 

 

 



FortiSIEM Windows Agent Pre-installation Notes

Hardware and Software Requirements

Windows Agents

Windows Agent Manager

Supported versions

Windows Agent

Windows Agent Manager

Communication Ports between Agent and Agent Manager

Licensing

When you purchase the Windows Agent Manager, you also purchase a set number of licenses that can be applied to the Windows devices you are monitoring. After you have set up and configured Windows Agent Manager, you can see the number of both Basic and Advanced licenses that are available and in use in your deployment by logging into your Supervisor node and going to Admin > License Management, where you will see an entry for Basic Windows Licenses Allowed/Used and Advanced Windows Licenses Allowed/Used. You can see how these licenses have been applied by going to Admin > Windows Agent Health. When you are logged into the Windows Agent Manager you can also see the number of available and assigned licenses on the Assign Licenses to Users page.

There are two types of licenses that you can associate with your Windows agent.

License Type: Description

None: An agent has been installed on the device, but no license is associated with it. This device will not be monitored until a license is applied to it.

Advanced: The agent is licensed to monitor all activity on the device, including logs, installed software changes, and file/folder changes.

Basic: The agent is licensed to monitor only logs on the device.

When applying licenses to agents, keep in mind that Advanced includes Basic, so if you have purchased a number of Advanced licenses, you could use all of those licenses for the Basic purpose of monitoring logs. For example, if you have purchased a total of 10 licenses, five of which are Advanced and five of which are Basic, you could apply all 10 licenses to your devices as Basic.

Feature License Type
Windows Security Logs Basic
Windows Application Logs Basic
Windows System Logs Basic
Windows DNS Logs Basic
Windows DHCP Logs Basic
IIS logs Basic
DFS logs Basic
Any Windows Log File Basic
Custom file monitoring Basic
File Integrity Monitoring Advanced
Installed Software Change Monitoring Advanced
Registry Change Monitoring Advanced
WMI output Monitoring Advanced
PowerShell Output Monitoring Advanced
Hardware and Software Requirements

Windows Agents

Component: Requirement (Notes)

CPU: x86 or x64 (or compatible) at 2 GHz or higher

Hard Disk: 10 GB (minimum)

Server OS: Windows XP SP3 and above (recommended)

Desktop OS: Windows 7/8 (performance issues may occur due to limitations of the desktop OS)

RAM: 1 GB for XP; 2+ GB for Windows Vista and above / Windows Server

Installed Software: .NET Framework 4.0 (can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=17718); PowerShell 2.0 or higher (can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=4045)

Windows OS Language: English

Windows Agent Manager

Each Manager has been tested to handle up to 500 agents at an aggregate 7.5K events/sec.

Component: Requirement (Notes)

CPU: x86 or x64 (or compatible) at 2 GHz or higher

Hard Disk: 10 GB (minimum)

Server OS: Windows Server 2008 and above (strongly recommended)

Desktop OS: Windows 7/8 (performance issues may occur due to limitations of the desktop OS)

RAM: 2 GB minimum for Windows 7/8 on a 32-bit OS; 4 GB minimum for Windows 7/8 and Windows Server 2008/2012 on a 64-bit OS

Installed Software:

.NET Framework 4.5 (can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=30653, and is already available on Windows 8 and Windows Server 2012)

SQL Server Express or SQL Server 2012, installed using “SQL Server Authentication Mode” (SQL Server Express does not have any performance degradation compared to SQL Server 2012)

PowerShell 2.0 or higher (can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=4045)

IIS 7 or higher installed (IIS 7, 7.5: the ASP.NET feature must be enabled from the Application Development Role Service of IIS; IIS 8.0+: the ASP.NET 4.5 feature must be enabled from the Application Development Role Service of IIS)

Windows OS Language: English
Supported versions

Windows Agent

Windows 7

Windows 8

Windows XP SP3 or above

Windows Server 2003 Server

Windows Server 2008

Windows Server 2008 R2

Windows Server 2012

Windows Server 2012 R2

Windows Agent Manager

Windows Server 2008 R2

Windows Server 2012

Windows Server 2012 R2

Communication Ports between Agent and Agent Manager

TCP port 443 (v1.1 onwards) and TCP port 80 (v1.0) on the Agent Manager, for receiving events from Agents. Ports 135, 137, 139, and 445 are needed for NetBIOS-based communication.

Installing FortiSIEM Windows Agent Manager
Prerequisites
  1. Make sure that the ports needed for communication between the Windows Agent and Agent Manager are open and the two systems can communicate.
  2. For versions 1.1 and higher, the Agent and Agent Manager communicate via HTTPS. For this reason, there is a special prerequisite: get your Common Name / Subject Name from IIS.
    1. Logon to Windows Agent Manager
    2. Open IIS by going to Run, typing inetmgr and pressing enter
    3. Go to Default Web Site in the left pane
    4. Right click Default Web Site and select Edit Bindings.
    5. In the Site Bindings dialog, check whether https appears under the Type column.
    6. If https is available, then:
      1. Select the row corresponding to https and click Edit.
      2. In the Edit Site Binding dialog, under the SSL certificate section, click the … button.
      3. In the Certificate dialog, under the General tab, note the value of Issued to. This is your Common Name / Subject Name.
    7. If https is not available, then you need to bind the default web site with https.
      1. Import a New certificate. This can be done in one of two ways
        1. Either create a Self Signed Certificate as follows
          1. Open IIS by going to Run, typing inetmgr and pressing enter
          2. In the left pane, select computer name
          3. In the right pane, double click on Server Certificates
          4. In the Server Certificate section, click on Create Self-Signed Certificate... from the right pane
          5. In Create Self-Signed Certificate dialog, specify a friendly name for the certificate and click OK
          6. You will see your new certificate in the Server Certificates list
        2. Or, Import a third party certificate from a certification authority.
          1. Buy the certificate (.pfx or .cer file)
          2. Install the certificate file in your server
          3. Import the certificate in IIS
          4. Go to IIS. Select Computer name and in the right pane select Server Certificates
          5. If certificate is PFX File
            1. In Server Certificates section, click on .. in right pane
            2. In the Import Certificate dialog, browse to pfx file and put it in Certificate file(.pfx) box
            3. Enter your .pfx password and click OK. Your certificate gets imported to IIS.
          6. If certificate is CER File
            1. In Server Certificates section, click on Complete Certificate Request… in right pane
            2. In the Complete Certificate Request dialog, browse to CER file and put it in File name section
            3. Enter the friendly name and click OK. Your certificate gets imported to IIS.
      2. Bind your certificate to the Default Web Site:
        1. Open IIS by going to Run, typing inetmgr and pressing Enter.
        2. Right-click Default Web Site and select Edit Bindings… In the Site Bindings dialog, click Add…
        3. In the Add Site Binding dialog, select https from the Type drop-down menu.
        4. The Host name is optional, but if you enter one it must be the same as the certificate’s common name / subject name.
        5. Select your certificate from the SSL certificate drop-down list.
        6. Click OK. Your certificate is now bound to the Default Web Site.
  3. Enable TLS 1.2 for Windows Agent Manager 2.0 when operating with FortiSIEM Supervisor/Worker 4.6.3 and above. By default, SSL3 / TLS 1.0 is enabled in Windows Server 2008 R2, so before proceeding with the server installation, enable TLS 1.2 manually as follows.
    1. Start an elevated Command Prompt (that is, with administrative privileges).
    2. Run the required commands sequentially (a hedged example is shown after this list).
    3. Restart the computer.
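The commands themselves are not reproduced in this copy of the guide. A commonly used way to enable TLS 1.2 on Windows Server 2008 R2 is to set the SCHANNEL registry values shown below from the elevated Command Prompt; treat this as an assumption and confirm it against your installation notes before running it.

rem Enable TLS 1.2 for both the client and server roles (keys are created if missing)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v Enabled /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v DisabledByDefault /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v Enabled /t REG_DWORD /d 1 /f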
Procedures
  1. On the machine where you want to install the manager, launch either the FortiSIEMServer-x86.MSI (for 32-bit Windows) or FortiSIEMServer-x64.MSI (for 64-bit Windows) installer.
  2. In the Welcome dialog, click Next.
  3. In the EULA dialog, agree to the Terms and Conditions, and then click Next.
  4. Specify the destination path for the installation, and then click Next.

By default the Windows Agent Manager will be installed at C:\Program Files\FortiSIEM\Server.

  5. Specify the destination path to install the client agent installation files, and then click Next.

By default these files will be installed at C:\FortiSIEM\Agent. The default location will be on the drive that has the most free storage space. This path will automatically become a shared location that you will access from the agent devices to install the agent software on them.

  6. In the Database Settings dialog:
    1. Select the database instance where metrics and logs from the Windows devices will be stored.
    2. Select whether you want to use Windows authentication; otherwise, provide the login credentials that are needed to access the SQL Server instance where the database is located.
    3. Enter the path where the FortiSIEM Agent Manager database will be stored. By default it is C:\FortiSIEM\Data.
  7. Provide the path to the FortiSIEM Supervisor, Worker, or Collector that will receive information about your Windows devices. Click Next.
  8. In the Administrator Settings dialog, enter the username and password credentials that you will use to log in to the Windows Agent Manager.

Both your username and password should be at least six characters long.

  9. (New in Release 1.1 for HTTPS communication between Agent and Agent Manager) Enter the common name / subject name of the SSL certificate created in prerequisite step 2.

  10. Click Install.
  11. When the installation completes, click Finish.
  12. You can now exit the installation process, or click Close Set Up and Run FortiSIEM to log into your FortiSIEM virtual appliance.

 

 

Installing FortiSIEM Windows Agent
Prerequisites
  1. Windows Agent and Agent Manager need to be able to communicate; agents need to access a path on the Agent Manager machine to install the agent software.
  2. Starting with Version 1.1, there is a special requirement if you want user information appended to file/directory change events. Typically, file/directory change events do not have information about the user who made the change. To get this information, you have to do the following steps; without them, file monitoring events will not have user information.
    a. In a Workgroup Environment:
      i. Go to Control Panel.
      ii. Open Administrative Tools.
      iii. Double-click Local Security Policy.
      iv. Expand Advanced Audit Policy Configuration in the left pane.
      v. Under Advanced Audit Policy, expand System Audit Policies – Local Group Policy Object.
      vi. Under System Audit Policies – Local Group Policy Object, select Object Access.
      vii. Double-click Audit File System in the right pane.
      viii. The Audit File System Properties dialog opens. In this dialog, under the Policy tab, select Configure the following audit events, and then select both the Success and Failure check boxes.
      ix. Click Apply and then OK.
    b. In an Active Directory Domain Environment, the FortiSIEM administrator can use Group Policies to propagate the above settings to the agent computers as follows:
      i. Go to Control Panel.
      ii. Open Administrative Tools.
      iii. Click Group Policy Management.
      iv. In the Group Policy Management dialog, expand Forest:<domain_name> in the left pane.
      v. Under Forest:<domain_name>, expand Domains.
      vi. Under Domains, expand <domain_name>.
      vii. Right-click <domain_name> and click Create a GPO in this domain, and link it here…
      viii. The New GPO dialog appears. Enter a new name (e.g., MyGPO) in the Name text box and click OK.
      ix. MyGPO appears under the expanded <domain_name> in the left pane. Click MyGPO and click the Scope tab in the right pane.
      x. Under the Scope tab, click Add in the Security filtering section.
      xi. The Select User, Computer or Group dialog opens. In this dialog, click the Object Types button.
      xii. The Object Types dialog appears. Uncheck all options, check the Computers option, and click OK.
      xiii. Back in the Select User, Computer or Group dialog, enter the FortiSIEM Windows Agent computer names under Enter the object name to select. You can choose computer names by clicking the Advanced button and then, in the Advanced dialog, clicking Find Now.
      xiv. Once the required computer name is specified, click OK and you will find the selected computer name under Security Filtering.
      xv. Repeat steps (xi) – (xiv) for all the required computers running FortiSIEM Windows Agent.
      xvi. Right-click MyGPO in the left pane and click Edit.
      xvii. The Group Policy Management Editor opens. In this dialog, expand Policies under Computer Configuration.
      xviii. Go to Policies > Windows Settings > Security Settings > Advanced Audit Policy Configuration > Audit Policies > Object Access > Audit File System.
      xix. In the Audit File System Properties dialog, under the Policy tab, select Configure the following audit events, and then select both the Success and Failure check boxes.
Procedure

Installing one agent

  1. Log in to the machine where you want to install the agent software as an administrator.
  2. Navigate to the shared location on the Windows Agent Manager machine where you installed the agent installation files in Step 5 of Installing FortiSIEM Windows Agent Manager.

The default path is C:\FortiSIEM\Agent.

  3. In the shared location, double-click the appropriate .MSI file to begin installation.

FortiSIEMAgent-x64.MSI is for the 64-bit Agent, while FortiSIEMAgent-x86.MSI is for the 32-bit Agent.

  4. When the installation completes, go to Start > Administrative Tools > Services and make sure that the FortiSIEM Agent Service has a status of Started.

Installing multiple agents via Active Directory Group Policy

Multiple agents can be installed via GPO if all the computers are on the same domain.

  1. Log on to Domain Controller
  2. Create a separate Organizational Unit containing all computers where the FortiSIEM Windows Agent has to be installed.
    1. Go to Start > Administrative Tools > Active Directory Users and Computers
    2. Right click on the root Domain on the left side tree. Click New > Organizational Unit
    3. Provide a Name for the newly created Organizational Unit and click OK.
    4. Verify that the Organizational Unit has been created.
  3. Assign computers to the new Organizational Unit.
    a. Click Computers under the domain. The list of computers will be displayed in the right pane.
    b. Select a computer in the right pane, right-click and select Move, and then select the new Organizational Unit.
    c. Click OK.
  4. Create a new GPO
    1. Go to Start > Administrative Tools > Group Policy Management
    2. Under Domains, select the newly created Organizational Unit.
    3. Right-click the Organizational Unit and select Create and Link a GPO here…
    4. Enter a Name for the new GPO and click OK.
    5. Verify that the new GPO is created under the chosen Organizational Unit
    6. Right click on the new GPO and click Edit. Left tree now shows Computer Configuration and User Configuration
    7. Under Computer Configuration, expand Software Settings.
    8. Click New > Package. Then go to the AOWinAgt folder on the network share, select the Agent MSI you need (32-bit or 64-bit), and click OK.
    9. The selected MSI appears in the right pane of the Group Policy Editor window.
    10. For Deploy Software, select Assigned and click OK.
  5. Update the GPO on the Domain Controller
    1. Open a command prompt
    2. Run gpupdate /force
  6. Update the GPO on the Agents
    1. Log on to the computer
    2. Open a command prompt
    3. Run gpupdate
    4. Restart the computer
    5. You will see FortiSIEM Windows Agent installed after restart

Upgrade

Upgrade Overview
Upgrading from 3.7.6 to latest
  1. First upgrade to 4.2.1 following the steps here. This involves OS migration.
  2. Upgrade from 4.2.1 to 4.3.1 following the steps here. This involves SVN migration.
  3. Upgrade from 4.3.1 to 4.5.2. This is a regular upgrade – single node case and multi-node case.
  4. Upgrade from 4.5.2 to 4.6.3 following the steps here. This involves the TLS 1.2 upgrade.
  5. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case.
Upgrading from 4.2.x to latest
  1. Upgrade to 4.3.1 following the steps here. This involves SVN migration.
  2. Upgrade from 4.3.1 to 4.5.2. This is a regular upgrade – single node case and multi-node case.
  3. Upgrade from 4.5.2 to 4.6.3 following the steps here. This involves the TLS 1.2 upgrade.
  4. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case
Upgrading from 4.3.1 to latest
  1. Upgrade from 4.3.1 to 4.5.2. This is a regular upgrade – single node case and multi-node case
  2. Upgrade from 4.5.2 to 4.6.3 following the steps here. This involves the TLS 1.2 upgrade.
  3. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case
Upgrading from 4.3.3 to latest
  1. Do the special pre-upgrade step described here.
  2. Upgrade to 4.5.2. This is a regular upgrade – single node case and multi-node case.
  3. Upgrade from 4.5.2 to 4.6.3 following the steps here. This involves the TLS 1.2 upgrade.
  4. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case
Upgrading from 4.4.x, 4.5.1 to latest
  1. Upgrade to 4.5.2. This is a regular upgrade – single node case and multi-node case
  2. Upgrade from 4.5.2 to 4.6.3 following the steps here. This involves the TLS 1.2 upgrade.
  3. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case
Upgrading from 4.5.2 to latest
  1. Upgrade to 4.6.3 following the steps here. This involves the TLS 1.2 upgrade.
  2. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case
Upgrading from 4.6.1 to latest
  1. Do the special pre-upgrade step described here.
  2. Upgrade to 4.6.3 following the steps here. This involves the TLS 1.2 upgrade.
  3. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case
Upgrading from 4.6.2 to latest
  1. Upgrade to 4.6.3 following the steps here. This involves the TLS 1.2 upgrade.
  2. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case

Upgrading from 4.6.3 to latest

  1. Upgrade to 4.7.1. This is a regular upgrade – single node case and multi-node case

Upgrading Windows Agents

FortiSIEM Windows Agent upgrades are covered in Upgrading FortiSIEM Windows Agent and Agent Manager.

Migrating from 3.7.x versions to 4.2.1

The 4.2 version of FortiSIEM uses a new version of CentOS, so upgrading to version 4.2 from previous versions involves a migration from those versions to 4.2.x, rather than a typical upgrade. This process involves two steps:

  1. You have to migrate the 3.7.6 CMDB to a 4.2.1 CMDB on a 3.7.6 based system.
  2. The migrated 4.2.1 CMDB has to be imported into a 4.2.1 system.

Topics in this section cover the migration process for supported hypervisors, both for in-place migrations and migrations using staging systems. Using a staging system requires more hardware, but minimizes downtime and CMDB migration risk compared to the in-place method. If you decide to use the in-place method, we strongly recommend that you take snapshots for recovery.



FortiSIEM Moving CMDB to a separate Database Host

It is desirable to move the CMDB (postgres) database to a separate host for the following reasons:

  1. In larger deployments, it reduces the database server load on the supervisor node, allowing more resources for the application server and other backend modules.
  2. Whenever high availability for CMDB data is desired, it is easier and cleaner to set up separate hosts with postgres replication that are managed separately than to do this on the embedded postgres on the supervisor. This is especially true in an AWS environment, where the AWS PostgreSQL Relational Database Service (RDS) lets you set up, in a few clicks, a DB instance that replicates across availability zones and fails over automatically.
Freshly Installed Supervisor

 

  1. Install separate PostgreSQL DB servers or an AWS RDS instance in Multi-AZ mode. Use PostgreSQL version 9.1 or greater. I’ll use the RDS example in the remaining steps. For instance, let’s say the hostname of the RDS instance in the us-west-2 region is phoenixdb.XXXXXX.us-west-2.rds.amazonaws.com on port 5432, with username ‘phoenix’, DB name ‘phoenixdb’ and password ‘YYYYYYYY’. You will need to allow the Supervisor and Worker nodes to connect to port 5432 on the RDS service, which means changing security groups to allow this.
  2. Make sure the above RDS host is reachable from the FortiSIEM supervisor.
  3. Install the FortiSIEM supervisor node and configure it as usual, including adding a license.
  4. Stop all the running services so that CMDB will not be modified further.
  5. Dump the CMDB data in the local postgres DB into a local file.
  6. Import the schema/data into the external postgres (a hedged command sketch follows this list).
  7. Change phoenix_config.txt to add the DB_SERVER_* info.
  8. Change the glassfish application server’s domain.xml to point to the external CMDB server.
  9. Change phoenix_config.txt to remove checking for the postgres process.
  10. Disable postgres from starting up.
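A hedged sketch of the dump and import steps (5 and 6 above) is shown below. The local database name, user, and dump format are assumptions based on the RDS values quoted earlier; confirm them against your phoenix_config.txt before running anything.

su - postgres -c "pg_dump -F c -f /tmp/phoenixdb.dump phoenixdb"
pg_restore -h phoenixdb.XXXXXX.us-west-2.rds.amazonaws.com -p 5432 -U phoenix -d phoenixdb /tmp/phoenixdb.dump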

 

 

Production / Existing Supervisor
  1. Install and have the external postgres ready as described at the beginning of the previous section
  2. Take point-in-time snapshots of the supervisor so you can revert if you hit any issues
  3. Stop crond on super, and wait for phwatchdog to stop
  4. Stop Apache on super and all workers so that collectors start buffering events
  5. Shutdown the worker nodes while you move CMDB out
  6. Follow the instructions from “Freshly Installed Supervisor” to complete the steps

FortiSIEM Windows Agent and Agent Manager Install

FortiSIEM can discover and collect performance metrics and logs from Windows servers in an agentless fashion via WMI. However, agents are needed when you need to collect richer data, such as file integrity monitoring information, or collect from a large number of servers.

This section describes how to set up the FortiSIEM Windows Agent and Agent Manager as part of the FortiSIEM infrastructure.

 

 



Using NFS Storage with AccelOps

When you install FortiSIEM, you have the option to use either local storage or NFS storage. For cluster deployments using Workers, the use of an NFS Server is required for the Supervisor and Workers to communicate with each other. These topics describe how to set up and configure NFS servers for use with FortiSIEM.

Configuring NFS Storage for VMware ESX Server

This topic describes the steps for installing an NFS server on CentOS Linux 6.x and higher for use with VMware ESX Server. If you are using an operating system other than CentOS Linux, follow your typical procedure for NFS server set up and configuration.

  1. Login to CentOS 6.x as root.
  2. Create a new directory in the large volume to share with the FortiSIEM Supervisor and Worker nodes, and change the access permissions to give FortiSIEM access to the directory.
  3. Check the shared directories (an example command sequence follows these steps).
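A minimal command sequence for steps 2 and 3 might look like the following. The /data path, ownership, and client subnet are assumptions and should be adapted to your environment.

mkdir -p /data
chown -R nobody:nobody /data                              # or whatever ownership gives FortiSIEM access
echo "/data 192.168.20.0/24(rw,no_root_squash)" >> /etc/exports
exportfs -ar
showmount -e localhost                                    # check the shared directories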

Related Links

Setting Up NFS Storage in AWS

 

Using NFS Storage with Amazon Web Services

Setting Up NFS Storage in AWS

Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS

Setting Up NFS Storage in AWS

Youtube Talk on NFS Architecture for AWS

Several architecture and partner options for setting up NFS storage that is highly available across availability-zone failures are presented by an AWS Solutions Architect in this talk (40 min) and the linked slides.

Using EBS Volumes

These instructions cover setting up EBS volumes for NFS storage. EBS volumes have a durability guarantee that is 10 times higher than traditional disk drives. This is because EBS data is replicated within an availability zone to protect against component failures (the equivalent of RAID), so adding another layer of RAID does not provide higher durability guarantees. EBS has an annual failure rate (AFR) of 0.1 to 0.5%. In order to have higher durability guarantees, it is necessary to take periodic snapshots of the volumes. Snapshots are stored in AWS S3, which has 99.999999999% durability (via synchronous replication of data across multiple data centers) and 99.99% availability. See the topic Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS for more information.

Using EC2 Reserved Instances for Production

If you are running these machines in production, it is significantly cheaper to use EC2 Reserved Instances (1 or 3 year) as opposed to on-demand instances.

  1. Log in to your AWS account and navigate to the EC2 dashboard.
  2. Click Launch Instance.
  3. Review these configuration options:
Network and Subnet: Select the VPC you set up for your instance.

Public IP: Clear the option Automatically assign a public IP address to your instances if you want to use VPN.

Placement Group: A placement group is a logical grouping for your cluster instances. Placement groups have low latency, full-bisection 10 Gbps bandwidth between instances. Select an existing group or create a new one.

Shutdown Behavior: Make sure Stop is selected.

Enable Termination Protection: Make sure Protect Against Accidental Termination is selected.

EBS-Optimized Instance: An EBS-optimized instance enables dedicated throughput between Amazon EBS and Amazon EC2, providing improved performance for your EBS volumes. Note that if you select this option, additional Amazon charges may apply.
  4. Click Next: Add Storage.
  5. Add EBS volumes up to the capacity you need for EventDB storage.

EventDB Storage Calculation Example

At 5000 EPS, you can calculate daily storage requirements to amount to roughly 22-30 GB (300k events are 15-20 MB on average in compressed format stored in EventDB). In order to have 6 months of data available for querying, you need to have 4-6 TB of storage. On AWS, the maximum EBS volume is sized at 1 TB. In order to have larger disks, you need to create software RAID-0 volumes. You can attach at most 8 volumes to an instance, which results in 8 TB with RAID-0. There is no advantage in using a RAID configuration other than RAID-0, because it does not increase durability guarantees. In order to ensure much better durability guarantees, plan on performing regular snapshots which store the data in S3, as described in Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS. Since RAID-0 stripes data across these volumes, the aggregate IOPS you get will be the sum of the IOPS on the individual volumes.
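To make the estimate concrete, the arithmetic works out roughly as follows, using only the numbers quoted above (the 20 MB figure is the upper end of the quoted range):

echo $(( 5000 * 86400 ))             # 432,000,000 events per day
echo $(( 432000000 / 300000 * 20 ))  # about 28,800 MB (~28 GB) per day
echo $(( 28800 * 180 / 1024 ))       # about 5,062 GB (~5 TB) for roughly 6 months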

  6. Click Next: Tag Instance.
  7. Under Value, enter the Name you want to assign to all the instances you will launch, and then click Create Tag.

After you complete the launch process, you will have to rename each instance to correspond to its role in your configuration, such as Supervisor, Worker1, Worker2.

  8. Click Next: Configure Security Group.
  9. Select Select an Existing Security Group, and then select the default security group for your VPC.

FortiSIEM needs access to HTTPS over port 443 for GUI and API access,  and access to SSH over port 22 for remote management, which are set in the default security group. This group will allow traffic between all instances within the VPC.

  10. Click Review and Launch.
  11. Review all your instance configuration information, and then click Launch.
  12. Select an existing Key Pair or create a new one to connect to these instances via SSH.

If you use an existing key pair, make sure you have access to it. If you are creating a new key pair, download the private key and store it in a secure location accessible from the machine from where you usually connect to these AWS instances.

  13. Click Launch Instances.
  14. When the EC2 Dashboard reloads, check that all your instances are up and running.
  15. Select the NFS server instance and click Connect.
  16. Follow the instructions to SSH into the volumes as described in Configuring the Supervisor and Worker Nodes in AWS, then configure the NFS mount point access to give the FortiSIEM internal IP full access, as shown in the commands below.
# Update the OS and libraries with the latest patches

$ sudo yum update -y

$ sudo yum install -y nfs-utils nfs-utils-lib lvm2

$ sudo su -

# echo Y | mdadm --verbose --create /dev/md0 --level=0 --chunk=256 --raid-devices=4 /dev/sdf /dev/sdg /dev/sd

# mdadm --detail --scan > /etc/mdadm.conf

# cat /etc/mdadm.conf

# dd if=/dev/zero of=/dev/md0 bs=512 count=1

# pvcreate /dev/md0

# vgcreate VolGroupData /dev/md0

# lvcreate -l 100%vg -n LogVolDataMd0 VolGroupData

# mkfs.ext4 -j /dev/VolGroupData/LogVolDataMd0

# echo "/dev/VolGroupData/LogVolDataMd0 /data       ext4    defaults        1 1" >> /etc/fstab

# mkdir /data

# mount /data

# df -kh

# vi /etc/exports

/data   10.0.0.0/24(rw,no_root_squash)

# exportfs -ar

# chkconfig --levels 2345 nfs on

# chkconfig --levels 2345 rpcbind on

# service rpcbind start

Starting rpcbind:                                          [  OK  ]

# service nfs start

Starting NFS services:                                     [  OK  ]

Starting NFS mountd:                                       [  OK  ]

Stopping RPC idmapd:                                       [  OK  ]

Starting RPC idmapd:                                       [  OK  ]

Starting NFS daemon:                                       [  OK  ]

Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS

In order to have high durability guarantees for FortiSIEM data, you should periodically create EBS snapshots on an hourly, daily, or weekly basis and store them in S3. The EventDB is typically hosted as a RAID-0 volume of several EBS volumes, as described in Setting Up NFS Storage in AWS. In order to reliably snapshot these EBS volumes together, you can use a script, ec2-consistent-snapshot, to briefly freeze the volumes and create a snapshot. You can then use a second script, ec2-expire-snapshots, to schedule cron jobs to delete old snapshots that are no longer needed. CMDB is hosted on a much smaller EBS volume, and you can use the same scripts to take snapshots of it.

You can find details of how to download these scripts and set up periodic snapshots and expiration in this blog post: http://twigmon.blogspot.com/2013/09/installing-ec2-consistent-snapshot.html

You can download the scripts from these GitHub projects:

https://github.com/alestic/ec2-consistent-snapshot
https://github.com/alestic/ec2-expire-snapshots
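As an illustration, a periodic snapshot job built on ec2-consistent-snapshot might be scheduled like the /etc/cron.d entry below. The region, mount point, and volume IDs are placeholders, and the exact flags should be verified against the script's --help output.

# Hypothetical hourly snapshot of the RAID-0 EBS volumes backing /data
0 * * * * root ec2-consistent-snapshot --region us-west-2 --freeze-filesystem /data vol-aaaa1111 vol-bbbb2222 vol-cccc3333 vol-dddd4444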



FortiSIEM General Installation

Configuring Worker Settings

If you are using a FortiSIEM clustered deployment that includes both Workers and Collectors, you must define the address of your Worker nodes before you register any Collectors. When you register your Collectors, the Worker information will be retrieved and saved locally to the Collector. The Collector will then upload event and configuration change information to the Worker.

Worker Address in a Non-Clustered Environment

If you are not using a FortiSIEM clustered deployment, you will not have any Worker nodes. In that case, enter the IP address of the Supervisor for the Worker Address, and your Collectors will upload their information directly to the Supervisor.

  1. Log in to your Supervisor node.
  2. Go to Admin > General Settings > System.
  3. For Worker Address, enter a comma-separated list of IP addresses or host names for the Workers.

The Collector will attempt to upload information to the listed Workers, starting with the first Worker address and proceeding until it finds an available Worker.
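For example, a Worker Address value might look like the following (hypothetical addresses; in a non-clustered deployment this would simply be the Supervisor's IP address):

10.10.1.21,10.10.1.22,10.10.1.23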

 

Registering the Supervisor
  1. In a Web browser, navigate to the Supervisor’s IP address: https://<Supervisor IP>.
  2. Enter the login credentials associated with your FortiSIEM license, and then click Register.
  3. When the System is ready message appears, click the Here link to log in to FortiSIEM.
  4. Enter the default login credentials.
User ID admin
Password admin*1
Cust/Org ID super
  5. Go to Admin > Cloud Health and check that the Supervisor Health is Normal.
Registering the Worker
  1. Go to Admin > License Management > VA Information.
  2. Click Add, enter the new Worker’s IP address, and then click OK.
  3. When the new Worker is successfully added, click OK.

You will see the new Worker in the list of Virtual Appliances.

  4. Go to Admin > Cloud Health and check that the Worker Health is Normal.
Registering the Collector to the Supervisor

The process for registering a Collector node with your Supervisor node depends on whether you are setting up the Collector as part of an enterprise or multi-tenant deployment. For a multi-tenant deployment, you must first create an organization and add Collectors to it before you register the Collector with the Supervisor. For an enterprise deployment, you install the Collector within your IT infrastructure and then register it with the Supervisor.

Create an Organization and Associate Collectors with it for Multi-Tenant Deployments

Register the Collector with the Supervisor for Enterprise Deployments

Create an Organization and Associate Collectors with it for Multi-Tenant Deployments
  1. Log in to the Supervisor.
  2. Go to Admin > Setup Wizard > Organizations.
  3. Click Add.
  4. Enter Organization Name, Admin User, Admin Password, and Admin Email.
  5. Under Collectors, click New.
  6. Enter the Collector Name, Guaranteed EPS, Start Time, and End Time.
  7. Click Save.

The newly added organization and Collector should be listed on the Organizations tab.

  8. In a Web browser, navigate to https://<Collector-IP>:5480.
  9. Enter the Collector setup information.
Name Collector Name
User ID Organization Admin User
Password Organization Admin Password
Cust/Org ID Organization Name
Cloud URL Supervisor URL

 

  10. Click to start the registration.

The Collector will restart automatically after registration succeeds.

  11. In the Supervisor interface, go to Admin > Collector Health and check that the Collector Health is Normal.
Register the Collector with the Supervisor for Enterprise Deployments
  1. Log in to the Supervisor.
  2. Go to Admin > License Management and check that Collectors are allowed by the license.
  3. Go to Setup Wizard > General Settings and add at least the Supervisor’s IP address.

This should contain a list of the Supervisor and Worker accessible IP addresses or FQDNs.

  4. Go to Setup Wizard > Event Collector and add the Collector information.
Setting Description
Name Will be used in step 6
Guaranteed EPS This is the number of Events per Second (EPS) that this Collector will be provisioned for
Start Time Select Unlimited
End Time Select Unlimited
  5. Connect to the Collector at https://<IP Address of the Collector>:5480.
  6. Enter the Name from step 4.
  7. The User ID and Password are the same as the admin user ID and password for the Supervisor.
  8. The IP address is the IP address of the Supervisor.
  9. For Organization, enter Super.
  10. The Collector will reboot during the registration, and you will be able to see its status on the Collector Health page.


FortiSIEM Installing a Collector on Bare Metal Hardware

You can install Collectors on bare metal hardware (that is, without a hypervisor layer). Be sure to read the section on Hardware Requirements for Collectors in Browser Support and Hardware Requirements before starting the installation process.

  1. Download the Linux collector ISO image from https://images.FortiSIEM.net/VMs/releases/CO/.
  2. Burn the ISO to a DVD so that you can boot from it to begin the setup.
  3. Before you begin the installation, make sure the host where you want to install the Collector has an Internet connection.
  4. Log into the server where you want to install the Collector as root and make sure your boot DVD is loaded.
  5. Go to /etc/yum.repos.d and make sure these configuration files are in the directory (a quick check is shown after these steps):

CentOS-Base.repo

CentOS-Debuginfo.repo

CentOS-Media.repo

CentOS-Vault.repo

  6. The system will reboot itself when installation completes.
  7. Follow the instructions in Registering the Collector to the Supervisor to complete the Collector setup.
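For step 5, a quick way to confirm the repository files are present is shown below; this is a simple check, not a command taken from the installation media.

ls -1 /etc/yum.repos.d/ | grep -E 'CentOS-(Base|Debuginfo|Media|Vault)\.repo'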
