Using NFS Storage with AccelOps

When you install FortiSIEM, you have the option to use either local storage or NFS storage. For cluster deployments using Workers, the use of an NFS Server is required for the Supervisor and Workers to communicate with each other. These topics describe how to set up and configure NFS servers for use with FortiSIEM.

Configuring NFS Storage for VMware ESX Server

This topic describes the steps for installing an NFS server on CentOS Linux 6.x and higher for use with VMware ESX Server. If you are using an operating system other than CentOS Linux, follow your typical procedure for NFS server setup and configuration.

  1. Login to CentOS 6.x as root.
  2. Create a new directory on the large volume to share with the FortiSIEM Supervisor and Worker nodes, and change its access permissions so that FortiSIEM can read and write to it.
  3. Check that the directories are being shared, as shown in the sketch below.
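
A minimal sketch of steps 2 and 3, assuming the large volume is mounted at /data and the FortiSIEM nodes sit on the 192.168.1.0/24 subnet (both values are placeholders; substitute your own path and network):

# Step 2: create the shared directory and open its permissions to FortiSIEM
# mkdir -p /data
# chmod -R 777 /data
# echo "/data 192.168.1.0/24(rw,sync,no_root_squash)" >> /etc/exports
# exportfs -ar
# Step 3: confirm the directory is being shared
# showmount -e localhost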

Related Links

Setting Up NFS Storage in AWS

Using NFS Storage with Amazon Web Services

Setting Up NFS Storage in AWS

Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS

Setting Up NFS Storage in AWS

YouTube Talk on NFS Architecture for AWS

An AWS Solutions Architect presents several architecture and partner options for setting up NFS storage that remains highly available across availability zone failures in a 40-minute talk, with an accompanying link to slides.

Using EBS Volumes

These instructions cover setting up EBS volumes for NFS storage. EBS volumes have a durability guarantee that is 10 times higher than traditional disk drives, because EBS data is replicated within an availability zone to protect against component failures (the equivalent of RAID); adding another layer of RAID therefore does not provide higher durability guarantees. EBS has an annual failure rate (AFR) of 0.1 to 0.5%. To obtain higher durability guarantees, take periodic snapshots of the volumes. Snapshots are stored in AWS S3, which has 99.999999999% durability (via synchronous replication of data across multiple data centers) and 99.99% availability. See the topic Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS for more information.

Using EC2 Reserved Instances for Production

If you are running these machines in production, it is significantly cheaper to use EC2 Reserved Instances (1 or 3 year) as opposed to on-demand instances.

  1. Log in to your AWS account and navigate to the EC2 dashboard.
  2. Click Launch Instance.
  3. Review these configuration options:
Network and Subnet: Select the VPC you set up for your instance.

Public IP: Clear the option Automatically assign a public IP address to your instances if you want to use VPN.

Placement Group: A placement group is a logical grouping for your cluster instances. Placement groups provide low-latency, full-bisection 10 Gbps bandwidth between instances. Select an existing group or create a new one.

Shutdown Behavior: Make sure Stop is selected.

Enable Termination Protection: Make sure Protect Against Accidental Termination is selected.

EBS Optimized Instance: An EBS-optimized instance provides dedicated throughput between Amazon EBS and Amazon EC2, improving performance for your EBS volumes. Note that if you select this option, additional Amazon charges may apply.
  4. Click Next: Add Storage.
  5. Add EBS volumes up to the capacity you need for EventDB storage.

EventDB Storage Calculation Example

At 5,000 EPS, daily storage requirements come to roughly 22-30 GB (300,000 events average 15-20 MB in the compressed format stored in EventDB). To keep 6 months of data available for querying, you therefore need 4-6 TB of storage. On AWS, the maximum size of a single EBS volume is 1 TB, so to build larger disks you need to create software RAID-0 volumes. You can attach at most 8 volumes to an instance, which yields 8 TB with RAID-0. There is no advantage in using a RAID configuration other than RAID-0: EBS already replicates data internally, so redundant RAID levels do not increase durability guarantees. For much better durability, plan on performing regular snapshots, which store the data in S3, as described in Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS. Since RAID-0 stripes data across the volumes, the aggregate IOPS you get is the sum of the IOPS of the individual volumes.
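
As a quick sanity check of this arithmetic, the following shell snippet reproduces the numbers above, assuming 5,000 EPS and a midpoint of 17.5 MB per 300,000 compressed events:

$ EPS=5000
$ EVENTS_PER_DAY=$((EPS * 86400))   # 432,000,000 events per day
$ awk -v e="$EVENTS_PER_DAY" 'BEGIN {
    gb = (e / 300000) * 17.5 / 1024              # daily storage, MB -> GB
    printf "%.0f GB/day, %.1f TB over 180 days\n", gb, gb * 180 / 1024
}'
25 GB/day, 4.3 TB over 180 days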

  6. Click Next: Tag Instance.
  7. Under Value, enter the Name you want to assign to all the instances you will launch, and then click Create Tag.

After you complete the launch process, you will have to rename each instance to correspond to its role in your configuration, such as Supervisor, Worker1, Worker2.

  8. Click Next: Configure Security Group.
  9. Choose Select an existing security group, and then select the default security group for your VPC.

FortiSIEM needs HTTPS access over port 443 for GUI and API access, and SSH access over port 22 for remote management; both are set in the default security group. This group allows traffic between all instances within the VPC.
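
If you prefer to script these rules instead of relying on the console, a sketch using the AWS CLI might look like the following (sg-xxxxxxxx is a placeholder for your security group ID, and in practice you would tighten the source CIDRs to your management network):

$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol tcp --port 443 --cidr 0.0.0.0/0   # HTTPS for GUI and API access
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol tcp --port 22 --cidr 0.0.0.0/0    # SSH for remote management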

  10. Click Review and Launch.
  11. Review all your instance configuration information, and then click Launch.
  12. Select an existing key pair, or create a new one, to connect to these instances via SSH.

If you use an existing key pair, make sure you have access to it. If you are creating a new key pair, download the private key and store it in a secure location accessible from the machine you usually use to connect to these AWS instances.

  13. Click Launch Instances.
  14. When the EC2 Dashboard reloads, check that all your instances are up and running.
  15. Select the NFS server instance and click Connect.
  16. SSH into the instance, following the instructions in Configuring the Supervisor and Worker Nodes in AWS.
  17. Configure the NFS mount point access to give the FortiSIEM internal IP full access, as shown below.
# Update the OS and libraries with the latest patches
$ sudo yum update -y
$ sudo yum install -y nfs-utils nfs-utils-lib lvm2
$ sudo su -

# Create a RAID-0 array from the attached EBS volumes
# (device names are examples; four devices match --raid-devices=4)
# echo Y | mdadm --verbose --create /dev/md0 --level=0 --chunk=256 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
# mdadm --detail --scan > /etc/mdadm.conf
# cat /etc/mdadm.conf

# Build an LVM volume on the array, create the filesystem, and mount it
# dd if=/dev/zero of=/dev/md0 bs=512 count=1
# pvcreate /dev/md0
# vgcreate VolGroupData /dev/md0
# lvcreate -l 100%vg -n LogVolDataMd0 VolGroupData
# mkfs.ext4 -j /dev/VolGroupData/LogVolDataMd0
# echo "/dev/VolGroupData/LogVolDataMd0 /data       ext4    defaults        1 1" >> /etc/fstab
# mkdir /data
# mount /data
# df -kh

# Export /data to the FortiSIEM subnet and start the NFS services
# vi /etc/exports
/data   10.0.0.0/24(rw,no_root_squash)
# exportfs -ar
# chkconfig --levels 2345 nfs on
# chkconfig --levels 2345 rpcbind on
# service rpcbind start
Starting rpcbind:                                          [  OK  ]
# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Stopping RPC idmapd:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
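
Once the NFS services are running, you can verify the export from the Supervisor (or any node on the 10.0.0.0/24 subnet) before configuring FortiSIEM to use it; 10.0.0.10 below is a placeholder for the NFS server's internal IP:

$ showmount -e 10.0.0.10
$ sudo mkdir -p /mnt/nfstest
$ sudo mount -t nfs 10.0.0.10:/data /mnt/nfstest   # test mount of the export
$ df -kh /mnt/nfstest
$ sudo umount /mnt/nfstest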

Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS

In order to have high durability guarantees for FortiSIEM data, you should periodically create EBS snapshots on an hourly, daily, or weekly basis and store them in S3. The EventDB is typically hosted on a RAID-0 array of several EBS volumes, as described in Setting Up NFS Storage in AWS. To reliably snapshot these EBS volumes together, you can use a script, ec2-consistent-snapshot, which briefly freezes the volumes and creates a snapshot. You can then use a second script, ec2-expire-snapshots, in scheduled cron jobs to delete old snapshots that are no longer needed. CMDB is hosted on a much smaller EBS volume, and you can use the same scripts to take snapshots of it.

You can find details on how to download these scripts and set up periodic snapshots and expiration in this blog post: http://twigmon.blogspot.com/2013/09/installing-ec2-consistent-snapshot.html

You can download the scripts from these GitHub projects:

https://github.com/alestic/ec2-consistent-snapshot
https://github.com/alestic/ec2-expire-snapshots
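
As an illustration only, an hourly cron entry driving ec2-consistent-snapshot might look like the following; the region, description, and volume IDs are placeholders, and the exact flags should be confirmed against the script's documentation:

# /etc/cron.d/eventdb-snapshot -- snapshot all four RAID-0 member volumes together,
# briefly freezing /data so the copies are mutually consistent
0 * * * * root ec2-consistent-snapshot --region us-east-1 --description "EventDB hourly" --freeze-filesystem /data vol-11111111 vol-22222222 vol-33333333 vol-44444444

A companion cron job running ec2-expire-snapshots can then prune snapshots that fall outside your retention window.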



One thought on "Using NFS Storage with AccelOps"

  1. John Morgan-House

    Hey Mike,

    Thanks for all you do. You have helped me enormously in the past. However, I am stuck on these instructions for setting up NFS. My scenario is that I have a Supervisor and an NFS instance in AWS. I have a Collector on prem. I need to make the NFS available to the Supervisor. When I try the steps in this post, I get the following:

    echo Y | mdadm –verbose –create /dev/md0 –level=0–chunk=256–
    mdadm: An option must be given to set the mode before a second device
    (–create) is listed

    I have searched around and found no direction regarding setting the mode, and also found that a double dash (--) is required rather than a single dash (-) unless it's a device. Can you help me out here?

    Thanks,

    JMH

