Migrating an AWS EC2 NFS-based Deployment via a Staging System

Overview

Prerequisites

Create the 3.7.x CMDB Archive

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Mounting the NFS Storage on Supervisors and Workers

Change the IP Addresses Associated with Your Virtual Appliances

Registering Workers to the Supervisor

Setting the 4.2.1 SVN Password to the 3.7.x Password

Overview

In this migration method, the production 3.7.x FortiSIEM systems are left untouched. A separate mirror-image 3.7.x staging system is created and then upgraded to 4.2.1. The NFS storage is mounted on the upgraded 4.2.1 system, and the collectors are redirected to it. The upgraded 4.2.1 system then becomes the production system, while the old 3.7.x system can be decommissioned. The collectors can then be upgraded one by one. The advantages of this method are minimal downtime during which incidents are not triggered, and no upgrade risk: if the upgrade fails for any reason, it can be aborted without endangering your production CMDB data. The disadvantages are the additional hardware required to set up the 3.7.x staging system, and the longer time needed to complete the upgrade because the staging system must first be built.

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts. (A hedged sketch of the snapshot and download commands follows this list.)
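As a minimal sketch of the snapshot and download prerequisites using the AWS CLI and wget: the volume ID below is a placeholder for the EBS volume backing your 3.7.x appliance, and the download URL is hypothetical (use the link and license credentials provided by AccelOps Support).

    # Snapshot the EBS volume backing the 3.7.x appliance (volume ID is a placeholder)
    aws ec2 create-snapshot \
        --volume-id vol-0123456789abcdef0 \
        --description "Pre-migration snapshot of 3.7.x FortiSIEM"

    # Download the migration scripts; the URL is hypothetical -- substitute the
    # download link associated with your AccelOps license
    wget --user "$AO_USER" --password "$AO_PASS" \
        https://example.com/downloads/ao-db-migration-4.2.1.tar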

Create the 3.7.x CMDB Archive

  1. Log in to your running 3.7.x production AccelOps virtual appliance as root.
  2. Change the directory to /root.
  3. Copy the migration script ao-db-migration-4.2.1.tar to the /root directory.
  4. Untar the migration script.
  5. Make sure that the owner of the ao-db-migration.sh and ao-db-migration-archiver.sh files is root.
  6. Run the archive script, specifying the directory where you want the archive file to be created.
  7. Check that the archived files were successfully created in the destination directory. (Example commands for these steps follow this list.)
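The following is a minimal sketch of steps 2 through 7. The /archive destination is an arbitrary example, and the exact arguments accepted by ao-db-migration-archiver.sh are an assumption here, so check the script's usage output before running it.

    cd /root
    tar xvf ao-db-migration-4.2.1.tar                                 # step 4: unpack the migration scripts
    ls -l ao-db-migration.sh ao-db-migration-archiver.sh              # step 5: confirm both files are owned by root
    ./ao-db-migration-archiver.sh /archive                            # step 6: assumed invocation; /archive is an example path
    ls -l /archive/cmdb-migration-*.tar /archive/opt-migration-*.tar  # step 7: verify the archives were created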

You should see two files: cmdb-migration-*.tar, which will be used to migrate the 3.7.x CMDB, and opt-migration-*.tar, which contains files stored outside of the CMDB that are needed to restore the upgraded CMDB to your new 4.2.1 virtual appliance.

  8. Copy the cmdb-migration-*.tar file to the 3.7.x staging Supervisor, using the same directory name you used in step 6.
  9. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 Supervisor. (A sample scp invocation follows this list.)
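As a sketch, both copies can be done with scp; staging-super, new-super, and /archive are placeholders for your 3.7.x staging Supervisor, your 4.2.1 Supervisor, and the archive directory you chose in step 6.

    scp /archive/cmdb-migration-*.tar root@staging-super:/archive/
    scp /archive/opt-migration-*.tar root@new-super:/root/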

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar. (See the sample invocation after this list.)
  4. When the migration script completes, the virtual appliance will reboot.
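A minimal sketch of step 3, assuming the script takes the CMDB archive and the opt archive as its two arguments; the file names and argument order are assumptions, so confirm them against the script's usage output.

    cd /opt/phoenix/deployment/
    # Placeholder file names; use the actual archives copied to /root
    ./post-ao-db-migration.sh /root/phoenixdb_migration_xyz /root/opt-migration-<timestamp>.tar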

Mounting the NFS Storage on Supervisors and Workers

Follow this process for each Supervisor and Worker in your deployment.

  1. Log in to your virtual appliance as root over SSH.
  2. Run the mount command to check the current mount location.
  3. In the /etc/fstab file on the Supervisor or Worker, change the mount path to the NFS location used by your 3.7.x deployment. (An example fstab entry follows this list.)
  4. Reboot the Supervisor or Worker.
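For illustration, an NFS entry in /etc/fstab might look like the line below; the server address, export path, and /data mount point are placeholders for the values used by your 3.7.x deployment.

    # Example /etc/fstab entry (placeholder server, export, and mount point)
    10.0.0.10:/fsiem-data  /data  nfs  rw,hard,intr  0 0

After the reboot, run mount again to confirm the NFS export is attached at the expected path.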

Change the IP Addresses Associated with Your Virtual Appliances

  1. Log in to the AWS EC2 dashboard.
  2. Click Elastic IPs, and then select the public IP associated with your 4.2.1 virtual appliance.
  3. Click Disassociate Address, and then Yes, Disassociate.
  4. In Elastic IPs, select the IP address associated with your 3.7.x virtual appliance.
  5. Click Disassociate Address, and then Yes, Disassociate.
  6. In Elastic IPs, select the production public IP of your 3.7.x virtual appliance, and click Associate Address to associate it with your 4.2.1 virtual appliance.

The virtual appliance will reboot automatically after the IP address is changed. (The same re-association can be scripted with the AWS CLI, as sketched below.)
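The sketch below is a hedged AWS CLI equivalent of steps 2 through 6; all IDs are placeholders, and the --association-id/--allocation-id forms apply to VPC-based Elastic IPs.

    # Detach the Elastic IPs from both appliances (association IDs are placeholders)
    aws ec2 disassociate-address --association-id eipassoc-11111111
    aws ec2 disassociate-address --association-id eipassoc-22222222

    # Attach the production Elastic IP to the 4.2.1 instance (IDs are placeholders)
    aws ec2 associate-address --allocation-id eipalloc-33333333 \
        --instance-id i-0123456789abcdef0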

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script: ./phsetsvnpwd.sh (see the sketch after the credentials below).
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****
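A minimal sketch of steps 2 and 3; per the steps above, the script then prompts for the organization, user, and password shown.

    cd /opt/phoenix/deployment/jumpbox
    ./phsetsvnpwd.sh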

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards have been migrated successfully.

