
Chapter 17 ePolicy

17.1 Overview

ePolicy is a script-based function for extending the capabilities of the FortiBalancer appliance. Using scripts written in the Tool Command Language (TCL), you can add new features on top of the appliance's existing functions. For example, the FortiBalancer appliance can be customized to support additional application protocols, precisely control IP application traffic in both the incoming and outgoing directions, or control the access of specified clients to real services.

17.2 ePolicy Elements

The elements of ePolicy are as follows:

  • Event
  • Command
  • Command invocation rule

17.2.1 Event

ePolicy uses an event-driven and message-response mechanism. The FortiBalancer appliance defines an event for every action occurring in each Client-FortiBalancer-Server connection. When such an event occurs, the FortiBalancer appliance will process traffic according to preconfigured ePolicy commands.

17.2.2 Command

ePolicy uses commands to instruct the FortiBalancer appliance to process traffic after an event occurs, such as rewriting packet contents, selecting real servers, selecting groups, or querying whether a group has valid real servers.

17.2.3 Command Invocation Rule

Command invocation rules define the relationship between events and commands. Based on the command invocation rules, you can flexibly combine events and commands to intercept, inspect, transform, or redirect IP application traffic in both the incoming and outgoing directions. For detailed information about events, commands, and command invocation rules, contact Fortinet Customer Support for the related documents.

17.3 ePolicy Scripts

By function, ePolicy scripts can be classified into the following types:

  • Setting script: specifies the traffic type of a virtual service. The following table lists the setting scripts that are currently supported (a combined example appears after this list):

Table 17–1 Content of Setting Scripts

Traffic Type    Content of the Setting Script
HTTP            message::type http
Diameter        message::type binary
                binary_message::length_start_offset 1
                binary_message::length_end_offset 3
Generic TCP     message::type binary
  • Runtime script: specifies the action the FortiBalancer appliance takes upon an event. The content of a runtime script should be written according to actual requirements, based on the events, commands, and command invocation rules. For examples of runtime scripts, contact Fortinet Customer Support for the related documents.
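
Putting the Table 17-1 directives together, the complete setting script for a Diameter virtual service would consist of the three lines below. This is a minimal sketch assembled from the table above; judging by the directive names, the two offsets mark where the message length field starts and ends within the binary message header.

message::type binary
binary_message::length_start_offset 1
binary_message::length_end_offset 3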


Chapter 14 Global Server Load Balancing (GSLB)

14.1 Overview

GSLB (Global Server Load Balancing) is also known as Smart DNS (SDNS). This function allows you to distribute Web traffic among a collection of servers deployed in multiple geographic locations. This chapter introduces GSLB and provides GSLB configuration examples.

14.2 Understanding Global Server Load Balance

In a GSLB solution, the FortiBalancer appliance works as a complementary DNS server that resolves a set of defined domain names according to load balancing methods. When DNS queries for a domain name are received (typically forwarded by a corporate or ISP DNS server), the GSLB function resolves the domain name to IP addresses selected from its Domain Name and IP Service Database using the configured load balancing method.

SDNS maintains a local Domain Name and IP Service Database by continuously exchanging local load information (Hello messages) and domain name/IP address information (Report messages) with the other members (also FortiBalancer appliances) in the GSLB network. For example, when a FortiBalancer appliance joins the SDNS network, it continuously sends its local domain name/IP address information to all other participating members (see the LLB configuration). For each message transmitted, a confirmation message is expected in return. If a confirmation message is missed or a member's information is not updated for a period of time (three tries), GSLB marks the non-responsive member as down, and all the domain names/IP addresses hosted by that FortiBalancer appliance are removed from the local Domain Name and IP Service Database.

The SDNS process works as follows:

 

Figure 14-1 SDNS Working Mechanism

As shown in the above figure, the SDNS module will process a normal DNS request from the client as follows:

  1. The client’s browser generates a DNS request for the domain name of the Web site the user wants to visit, and sends the request to its local DNS server.
  2. The local DNS server receives the request and searches its local cache. If no cache entry matches, it forwards the request to the upper-level SDNS device. In the example figure above, the request is sent to an SDNS server at Beijing according to the configuration on the local DNS server.
  3. The SDNS server at Beijing continuously collects the status information of all the application servers in its local Domain Name and IP Service Database, and selects a proper application server for the request based on the pre-configured load balancing algorithms. In this example, the application server at New York is selected.
  4. The SDNS server at Beijing returns the IP address of the application server at New York to the client’s local DNS server.
  5. Upon receiving the response, the local DNS server forwards the IP address to the client.
  6. The client’s browser uses the IP address in the response to open an HTTP connection with the corresponding FortiBalancer appliance and proceeds to download the Web page.

In this process, the response is cached on both the client’s local DNS server and the client’s browser.

Note: In this chapter, we use the terms “member” and “SDNS member” frequently. Both refer to a FortiBalancer appliance that participates in GSLB management.

14.2.1 SDNS Member Reporter-Receiver Hierarchy

All SDNS members can be divided into two groups: SDNS servers and HTTP proxy cache servers. All of them are FortiBalancer appliances; the HTTP proxy cache servers serve as the “reporters” and the SDNS servers serve as the “receivers”.

 

Figure 14-2 SDNS Reporter-Receiver Hierarchy

SDNS Servers

SDNS servers are responsible for DNS resolving. Every HTTP proxy cache server will report its status information to SDNS servers. The status information includes:

  • The domain name configured on proxy cache servers
  • The IPs which are configured for a domain name and their status (“UP” or “DOWN”)
  • The domain name traffic, IP traffic, and proxy traffic on the proxy servers
  • The status of proxy cache servers (“UP” or “DOWN”)

HTTP Proxy Cache Servers

HTTP proxy cache servers are responsible for HTTP services. All kinds of HTTP requests are directed to the HTTP proxy cache servers, mostly by the SDNS servers. The HTTP proxy cache servers collect their local status information and send it to the SDNS servers at a specified frequency. If a FortiBalancer appliance is both an SDNS server and a proxy cache server, it reports its local status information to all the SDNS servers (including itself) and collects the status information from all the proxy cache servers.


Chapter 13 Link Load Balancing (LLB)

13.1 Overview

This chapter details the configuration of the following Inbound and Outbound Link Load Balancing implementations:

  • Single FortiBalancer appliance and two ISPs
  • Dual FortiBalancer appliances and two ISPs

13.2 Understanding LLB

LLB (Link Load Balancing) allows TCP/IP network traffic to be load balanced through up to 128 upstream Internet Service Providers (ISPs). Load balancing can be performed on egress to the Internet (outbound LLB) or on ingress from the Internet (inbound LLB). LLB methods include rr (Round Robin), wrr (Weighted Round Robin), sr (Shortest Response), dd (Dynamic Detecting) and hi (Hash IP). LLB also includes ISP/link failure detection through default gateway and link path health checking.

The FortiBalancer appliance identifies links based on the logical port and peer MAC address. The statistics of LLB links are also collected based on the logical port and peer MAC address.

13.2.1 Outbound LLB

Outbound LLB provides optimized outbound link utilization for environments that have more than one default gateway. In essence, it allows outbound traffic to be distributed among multiple upstream/ISP routers.

For example, let’s say you have Internet connectivity provided by two ISPs: ISP1 and ISP2. ISP1 assigns the address range 100.1.1.0/24 for use on your network devices, and ISP2 assigns the address range 200.1.1.0/24. Outbound LLB allows you to load balance outbound connections through ISP1 and ISP2. Connections forwarded through ISP1 are NATted to an address from the range assigned by ISP1, and connections forwarded through ISP2 are NATted to an address from the range assigned by ISP2. Thus, inbound responses for those connections return through the ISP that they were originally sent through. If Internet connectivity through one of the ISP links is lost or interrupted, outbound traffic is no longer sent through that ISP; all traffic is distributed to the functional ISP.

FortiBalancer outbound LLB methods work well for traffic based on the TCP and UDP protocols, but they cannot effectively load balance packets of other IP protocols such as IPsec or GRE. To deal with this problem, the FortiBalancer now also supports LLB for such IP-based packets, so that they can be distributed across different links in the same way TCP/UDP packets are processed.

13.2.2 Inbound LLB

Inbound LLB provides service resiliency for inbound clients. Hosted services are visible to external clients via a separate IP address from the address space assigned by each ISP.

To illustrate, let’s use the same example ISPs as mentioned previously. All external clients trying to connect to the addresses assigned by ISP1 will be routed through ISP1’s backbone. All external clients trying to connect to addresses assigned by ISP2 will be routed through ISP2’s backbone. Inbound LLB allows you to advertise a device or Virtual IP (VIP) using two IP addresses: one from ISP1 and the other from ISP2. A DNS server on the FortiBalancer appliance will respond to queries for configured domain names. The responses will contain an IP address from ISP1 or ISP2, both representing the same device or VIP. If Internet connectivity through one of the ISP links is lost, the DNS server will not respond with the address from the failed ISP. Clients will receive only the address from the functional ISP.
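
To illustrate the resulting DNS behavior, a domain advertised with one address per ISP might resolve as follows (the domain name and host addresses below are hypothetical, chosen from the ISP ranges in the example above):

www.example.com.  IN  A  100.1.1.10   ; address from ISP1's range
www.example.com.  IN  A  200.1.1.10   ; address from ISP2's range

If the ISP1 link fails, the DNS server stops returning the 100.1.1.10 record, so clients receive only 200.1.1.10.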

13.2.3 LLB Health Check

LLB Health Check is used to check whether the link between the FortiBalancer interface and the upstream device is available. This can be accomplished by broadcasting ARP requests at regular intervals and by pinging a user-defined upstream IP address. TCP-based and DNS-based health checks are also supported. The ICMP, TCP, and DNS health checks all run in userland, which greatly improves health check performance.

Broadcasting ARP requests at regular intervals checks the availability of the link path between the FortiBalancer interface and the upstream ISP router. Pinging a user-defined upstream IP address verifies not only the link path between the FortiBalancer interface and the upstream ISP router, but also the link path between the upstream ISP router and the user-defined upstream IP address. Multiple upstream IP addresses can be defined for more reliable checking: if any of the check points is pingable, the related link is considered usable. This ensures that a WAN link is up before traffic is forwarded across it.

13.2.4 LLB Methods

Outbound LLB supports the following load balancing methods:

  • rr (Round Robin)
  • wrr (Weighted Round Robin)
  • sr (Shortest Response Time)
  • dd (Dynamic Detecting)
  • hi (Hash IP)

Inbound LLB supports three load balancing methods:

  • rr (Round Robin)
  • wrr (Weighted Round Robin)
  • proximity

Round Robin distributes each new session to gateways in an alternating (round robin) way. This is the default load balancing method.

Weighted Round Robin is similar to Round Robin, except that a bias (or weight) may be assigned to each gateway so that some gateways receive more sessions than others. This allows more traffic to be directed through an ISP with higher bandwidth capacity. For example, if ISP1 is assigned weight 3 and ISP2 weight 1, roughly three out of every four new sessions are sent through ISP1.

Shortest Response Time: The link with the shortest response time gets the next request. The shortest response time of a link is calculated from the initiation process of each TCP connection (both inbound and outbound). For the most accurate results, there should be a sufficient volume of TCP connections; a few long-lived TCP connections or UDP-only traffic will not produce accurate measurements.

Note:

If neither SLB traffic nor NAT traffic goes through the system, the LLB SR method cannot work properly.

The “sr” method cannot be used to load balance IP fragments, non-TCP/UDP packets, and reassembled UDP packets.

Dynamic Detecting performs proximity calculations through all available ISP paths to the destinations. Using parallel probing, a request from the client is sent to a destination through the different ISP paths at the same time. When the first response returns, the optimal ISP with the shortest response time is selected for this request and the other ISP connections are abandoned. For future outbound traffic to the same destination, the FortiBalancer appliance chooses the best ISP connection according to the results of these proximity calculations.

Hash IP distributes outbound traffic among links by performing a hash operation on the source IP, so that a link with a higher weight is selected with a higher probability. When the chosen link is down, the system performs another hash operation over the remaining available links. When HI is deployed as the LLB method, the IPflow function can be disabled.

Proximity: The IP address of the nearest DNS server is sent to the client as the response. When a DNS request arrives, the FortiBalancer first searches the Eroute table in reverse for a proximity route matching the source address of the DNS request, and then responds to the client with the corresponding DNS server’s IP address (A record) according to the Eroute gateway.

13.2.5 Policy-based Routing (Eroute)

LLB policies allow administrators to direct outbound traffic to a preferred route based on the IP address (source and destination) and service type (mail, FTP, Web, etc.). Policy-based routing, unlike regular routing, allows the source IP, source port, destination port, and protocol to be included in route selection. For example, a routing policy can ensure that all traffic generated by AOL Instant Messenger always uses the same link. If the instant messenger client used different destination IP addresses in its requests and those requests were sent through different routes, the server might be confused, causing login failures.

Configuring a routing policy prevents this problem. The CLI command for that would be:

FortiBalancer(config)#ip eroute aol_route 1500 0.0.0.0 0.0.0.0 0 0.0.0.0 0.0.0.0 5190 tcp gateway_ip 1
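
Reading the fields of this example (the interpretation below is inferred from the values shown and from the Eroute parameters listed in the note in section 13.2.8; treat the exact field order as an assumption rather than authoritative syntax):

  • aol_route – rule name
  • 1500 – rule priority (falls in the EROUTE-N range 1001-1999 of Table 13-1)
  • 0.0.0.0 0.0.0.0 0 – source IP, source mask, and source port (any)
  • 0.0.0.0 0.0.0.0 5190 – destination IP, destination mask, and destination port (5190 is the AOL Instant Messenger port)
  • tcp – protocol
  • gateway_ip 1 – the gateway to forward through, followed by what appears to be a weight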

The FortiBalancer appliance supports at most 5000 eroutes.

IP region

Eroute supports IP region. Administrators can import a pre-defined IP region table via the HTTP, FTP, or Local File method, and then execute the command “ipregion route” to apply the imported table. This generates a large number of Eroute configurations without requiring complex manual configuration. Administrators can also export the IP region table via the FTP URL or Local File method.

The FortiBalancer appliance checks the contents of the file rather than the file type when an IP region file is imported. To ensure that the IP region file can be imported successfully, define the file contents strictly, with the following items included in each entry:

  • IP subnet (in CIDR format)
  • Country name (optional, up to 7 bytes)
  • Brief description (optional, up to 63 bytes)

These items must be separated with a Tab. For example:

27.8.0.0/13	CN	China Unicom Chongqing province network
27.36.0.0/14	CN	China Unicom Guangdong province network

Note: 

  1. By default, there are three predefined IP region tables: “predefined_cernet”, “predefined_cnc” and “predefined_ct”. It is recommended not to reuse the names of these default predefined IP region tables.
  2. The routes and proximity rules configured for an IP region exist as a whole in the system. Administrators cannot change or remove a single route or rule individually.

13.2.6 LLB Session Timeout

After an ISP link has been selected for an IP flow (source IP and destination IP) pair, all traffic with the same source IP and destination IP will be sent to the same ISP. After an IP flow has been idle for a period of time, the session will be removed. Subsequent IP flows will once again be distributed based on the load-balancing algorithm.

13.2.7 Route Priority

Administrators may need to direct outbound traffic to a preferred route based on the IP address and protocol type. The FortiBalancer appliance supports several types of routing rules, among which Eroute priority is higher than that of the default and static routes. Default routes have priority 1, and static routes have priorities 101-132 depending on the netmask; for example, a static route with a 24-bit netmask has priority 124, and one with a 32-bit netmask has priority 132. Routes that correspond to the interfaces have priority 2000. Routes created from traffic that comes from the local subnet are called droutes (Direct Routes) and also have priority 2000.

The following table shows the priority of different types of routes:

Table 13-1 Route Priority

Name of Route     Priority
EROUTE-P          2001-2999
IROUTE, DROUTE    2000
RTS               1999
EROUTE-N          1001-1999
IPFLOW            1000-1999 (defaults to 1000)
STATIC ROUTE      101-132 (IPv4), 101-228 (IPv6)
DYNAMIC ROUTE     101-132 (IPv4), 101-228 (IPv6)
LLB LINK ROUTE    2
DEFAULT ROUTE     1

13.2.8 Link Bandwidth Management

For better link bandwidth management, the FortiBalancer appliance allows administrators to set a threshold value for the LLB link bandwidth.

When performing link selection for outbound traffic, the system considers not only the routing policies configured for the links but also the load status of each link. When the current link has reached its configured bandwidth threshold, the FortiBalancer appliance searches for available links among the matched routes in descending order of priority. It first searches for available links among routes with the same priority as the current link; if all of those links have reached their bandwidth thresholds, it searches among routes with lower priorities. If the gateways of all matched routes are down or have reached their configured bandwidth thresholds, the FortiBalancer appliance still chooses the current link to transmit the traffic.

In addition, the FortiBalancer appliance allows administrators to configure a priority for the LLB link bandwidth. If the priority of a matched route is higher than the LLB link bandwidth priority, the traffic will be directly forwarded through this route.

With the LLB bandwidth management function, you do not need to configure Eroutes with the same priorities for multiple links. This improves the efficiency and flexibility of link bandwidth configuration and management.

Note:

  1. If the traffic hits an RTS or IPflow route, it is forwarded directly through the relevant LLB link, regardless of whether that link has reached its bandwidth threshold.
  2. If an Eroute is configured with its source IP address, source mask, destination IP address, and destination mask all set to 0.0.0.0 and its source and destination port numbers set to 0, the FortiBalancer appliance will not search for available links among matched routes whose priorities are lower than 1000.

13.2.9 IPv6 Support for LLB

The FortiBalancer appliance provides broad IPv6 support in the LLB module: Eroute, inbound and outbound LLB, link health check, and IP region can all work in an IPv6 network environment. For an Eroute, the source IP, destination IP, gateway IP, and IP region can all be configured with IPv6 addresses. Note, however, that a single IP region table may contain only IPv4 or only IPv6 addresses. For outbound LLB, only route-based LLB supports IPv6 configurations; NAT-based LLB does not.


Chapter 12 Quality of Service (QoS)

12.1 Overview

This chapter introduces how to set up the QoS (Quality of Service) function on the FortiBalancer appliance. QoS provides administrators with control over network bandwidth and allows them to manage the network from the business perspective rather than the purely technical one.

12.2 Understanding QoS

QoS for networks is an industry-wide set of standards and mechanisms for ensuring high-quality performance for critical applications. By using QoS mechanisms, network administrators can use existing resources efficiently and ensure the required service level without reactively expanding or over-provisioning their networks.

QoS provides network administrators with the ability to manage TCP, UDP, and ICMP flows through a queuing mechanism and packet filtering policies. Using queues and filter rules, QoS supports both bandwidth management and priority control.

12.2.1 Queuing Mechanism

The FortiBalancer appliance implements a queue-based QoS. A queue here is a queue of network packet buffers: after the packet at the head of the queue has been processed, new packets awaiting processing are appended at the tail of the queue.

Each queue is bound to a particular network interface and controls either the incoming or the outgoing traffic of that interface. QoS queues are organized in tree-like structures: at the top of a tree, a root queue is defined for either the incoming or the outgoing traffic of a network interface; under the root queue there can be multiple sub-queues, and sub-queues can in turn have their own sub-queues (see the sketch below). For each interface, at most two queue trees can be configured: one for incoming traffic, and the other for outgoing traffic.
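
As an illustration, a queue tree for the outgoing traffic of an interface might look like the following (the interface name, queue names, and bandwidth figures are hypothetical):

root_out (port1 outbound, 100 Mbps)
├── q_web  (60 Mbps)
│   ├── q_web_static  (20 Mbps)
│   └── q_web_dynamic (40 Mbps)
└── q_mail (40 Mbps)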

Each queue is configured with bandwidth limit and priority for packet processing.

12.2.2 Packet Filter Rule

A QoS filter is a rule which associates particular network traffic with a QoS queue.

In a filter rule, the network traffic is specified by five parameters: source IP subnet, source port, destination IP subnet, destination port, and protocol. Through this association, administrators can deploy either application-oriented or link-oriented QoS control. Normally, application-oriented filter rules have TCP or UDP ports defined, while link-oriented filter rules focus on source or destination IP addresses, as illustrated below.
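
For example, the two kinds of rules could be expressed as the following five-tuple-to-queue mappings (the subnets, ports, and queue names are hypothetical):

(src any,             src port any, dst any, dst port 80,  proto tcp) -> q_web     ; application-oriented: all HTTP traffic
(src 192.168.10.0/24, src port any, dst any, dst port any, proto any) -> q_branch  ; link-oriented: one branch subnet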

12.2.3 Bandwidth Management

Bandwidth management is realized by a set of QoS filter rules that bind particular network traffic to pre-defined QoS queues with bandwidth limits. The QoS filter rules help the FortiBalancer appliance allocate appropriate bandwidth to satisfy the needs of various applications and links.

For more flexible bandwidth control, a “BORROW/UNBORROW” strategy is applied to QoS queues in the tree-like structure. When a queue’s “BORROW” flag is turned on, its bandwidth can be expanded by borrowing from its parent queue. If the parent queue has no extra bandwidth to share, it can in turn borrow from its own parent, up to the root queue.

12.2.4 Priority Control

Priority control is accomplished by QoS queues with different priorities. All packets from different applications or links are first classified by the QoS filter rules and then distributed to the predefined queues with their pre-configured priorities.

This priority mechanism works especially well when the network becomes congested. If traffic reaches a peak, packets are lost when the number of packets waiting for processing exceeds the maximum queuing buffers. Under such circumstances, the packets belonging to the queues with the highest priority are processed first, while packets with lower priorities may be dropped. In this way, mission-critical applications can be assigned the highest priority, guaranteeing that the most important transactions keep working.


Chapter 10 HTTP Compression

10.1 Overview

The FortiBalancer appliance supports in-line compression of HTTP objects. By employing this licensed feature, administrators can maximize throughput to the desired site, while end users experience quicker download speeds. This chapter describes the configuration of the HTTP Compression capabilities that are part of the FortiBalancer platform. The configuration of HTTP Compression functionality is divided into two main parts: basic configuration and advanced configuration.


Chapter 8 HTTP Content Rewrite

8.1 Overview

The HTTP Content Rewrite feature allows end users to access the HTTP content on the Web servers behind the FortiBalancer appliance. The feature aims to reduce network latency and improve the user experience.

This chapter will cover the theories and configurations of the HTTP Content Rewrite feature.


Chapter 5 High Availability (HA)

5.1 Overview

As network applications develop, customers have ever higher requirements for the reliability of networks and network appliances. During network planning and design, some critical network appliances must be given redundancy protection mechanisms to improve the reliability of the network. The Clustering function described in the “Clustering” chapter uses VRRP technology to solve the single-point-of-failure problem. This chapter introduces the High Availability (HA) function newly provided by FortiBalancer appliances. The HA function not only solves the single-point-of-failure problem, but also provides more policies to ensure network reliability.

The HA function allows two or more FortiBalancer appliances to continuously exchange their running status with each other and keep their configurations synchronized. When an appliance goes down, the other available appliances take over the application services of the faulty appliance, which ensures the high availability of application services.

In addition, the HA function provides Stateful Session Failover (SSF). With SSF, when a service failover occurs, the connections on the service are switched over to the new appliance. This avoids the interruption of connections and therefore improves the user experience. The HA function can be deployed flexibly: besides the Active/Active and Active/Standby deployment scenarios, it can be deployed among multiple appliances to achieve mutual backup.


Chapter 4 Clustering

4.1 Overview

The clustering function allows you to maintain high availability within a local site. With additional options, you can also distribute load across the multiple appliances within a cluster.

4.2 Understanding Clustering

The Clustering function allows two or more FortiBalancer appliances to be grouped together to form a logical device, which provides scalability and high availability within a local site. Please refer to the following figure.

 

Figure 4-1 FortiBalancer Clustering

Clustering can be configured in Active-Standby (A/S) or Active-Active (A/A) mode:

Active-Standby mode – In Active-Standby mode, all VIPs on one FortiBalancer appliance in the cluster are in the master state, and the corresponding VIPs on the other FortiBalancer appliances in the cluster are standby. In this mode, clustering supports fast failover.

Active-Active mode – In Active-Active mode, each FortiBalancer appliance in the cluster has a different master VIP or cluster ID.

4.2.1 Fast Failover

The Fast Failover (FFO) mechanism uses an additional serial port (the fast failover port on the FortiBalancer appliance motherboard) to transparently detect the peer’s status within a cluster (refer to the following figure). When one system powers off, panics, reboots, or one of its interfaces loses carrier (link disconnection), all traffic is immediately switched to the other system. The Clustering function with the fast failover mechanism provides higher availability and a much faster response time than typical Clustering.

 

Figure 4-2 Clustering FFO Mode

4.2.2 Discreet Backup Mode

In traditional clustering, the backup and the master exchange state information over the network. If the backup does not receive the VRRP (Virtual Router Redundancy Protocol) multicast packets from the master within a specified time, it forcibly preempts the master. However, because of network complexity, this approach may lead to a double-master state when something unexpected happens.

Discreet Backup mode is designed to prevent a double-master state. In this mode, the system decides whether a state transition is needed based on the devices’ state information detected over a heartbeat cable. This makes state transitions more reliable, and VRRP packet loss will not result in a double-master state.

The following shows how the Discreet Backup mode works.

 

Figure 4-3 Discreet Backup Mode Working Mechanism

  1. After clustering is turned on, the device enters the Init state. Then, in order to check the health of the heartbeat cable, the Init device switches to the FFO state.
  2. The device collects the health information of the heartbeat cable. If the heartbeat cable is well connected, the device switches to the Backup state.
     Note: Even if the heartbeat cable is disconnected, the device still switches to the Backup state and clustering still works; however, discreet mode is then ineffective.
  3. If the backup receives a higher-priority VRRP packet, it switches to the Discreet Backup state.
  4. In the following events, the discreet backup switches back to the Backup state:
     • The device in the Discreet Backup state receives a lower-priority VRRP packet (after this state transition, the backup goes on to switch to the Master state).
     • The device in the Discreet Backup state checks the heartbeat cable health and finds the cable disconnected.
  5. In the following events, the backup switches to the Master state:
     • The backup receives a lower-priority VRRP packet (in Preemption mode).
     • The backup does not receive a VRRP packet from the master for three consecutive broadcast intervals (the default interval is 5 seconds, so three intervals are 15 seconds).
  6. If the master receives a higher-priority VRRP packet, it switches to the Backup state.
  7. If the heartbeat cable detects that the master’s NIC is down, the discreet backup switches to the Master state directly.

Note: All cluster state transitions can be traced by the command “show cluster virtual transition”.

By default, discreet backup mode is turned off.

To enable the discreet backup mode, the following two commands MUST both be configured:

FortiBalancer(config)#cluster virtual ffo on

FortiBalancer(config)#cluster virtual discreet on

4.2.3 IPv6 Support for Clustering

The FortiBalancer Clustering function now supports IPv6 VIPs switchover. Both IPv4 and IPv6-based VRRP packets can be processed by the FortiBalancer appliance.

If the interface for Clustering is configured with both IPv4 and IPv6 addresses, or with only the IPv4 address, then IPv4-based VRRP packets will be used for communication between the FortiBalancer appliances. If only the IPv6 address is configured on the interface for Clustering, then IPv6-based VRRP packets will be used.

Note: The VRRP packets are incompatible with each other among different OS versions. So please use the same OS version for the FortiBalancer appliances in a cluster.

4.3 Clustering Configuration

4.3.1 Clustering SLB VIPs

When using the clustering capabilities of the FortiBalancer appliance, we first define the SLB virtual IPs that we want to use in the cluster. Each of the following sections defines the virtual IPs used in its configuration.

For information about SLB, please refer to the chapter Server Load Balancing (SLB).

4.3.1.1 Active-Standby: Two Nodes

Configuration Guidelines

In Active-Standby mode, one node in the cluster will be the master of the VIP, and thus active. The other node in the cluster will be in standby mode. Upon failure of the active node, the standby node will take over the VIP and become master. If preemption has been enabled on the initial master node, it will reassume mastership when it returns to a working state. Otherwise, the VIP will stay with the new master node until the node fails.

Refer to the following figure for the typical layout of Active-Standby architecture, in which:

  • FortiBalancer1 is the current master, and handles SLB traffic for VIP.
  • FortiBalancer2 is the backup, and listens for advertisements from the master. It will assume master status if FortiBalancer1 stops sending advertisements (i.e. FortiBalancer1 fails).

 

Figure 4-4 Active-Standby Two-Node Architecture

Table 4-1 General Settings of Active-Standby Two-Node Clustering

Operation                                  Command
Configure SLB                              Refer to the SLB Configuration section.
Configure a virtual interface              cluster virtual ifname <interface_name> <cluster_id>
Configure virtual cluster authentication   cluster virtual auth <interface_name> <cluster_id> {0|1} [password]
Configure preemption                       cluster virtual preempt <interface_name> <cluster_id> <mode>
Configure virtual IP                       cluster virtual vip <interface_name> <cluster_id> <vip>
Configure priority                         cluster virtual priority <interface_name> <cluster_id> <priority> [synconfig_peer_name]
Enable the virtual cluster                 cluster virtual {on|off} [cluster_id|0] [interface_name]
Enable the fast failover feature           cluster virtual ffo {on|off}
                                           cluster virtual ffo interface carrier loss timeout <interface_timeout>

Configuration Example for Active-Standby SLB Clustering via CLI

Now let’s start to configure FortiBalancer1 and FortiBalancer2:

  • Step 1 Configure SLB for both FortiBalancer1 and FortiBalancer2

FortiBalancer1(config)#slb real http “server1” 192.168.1.50 80 1000 tcp 1 1

FortiBalancer1(config)#slb real http “server2” 192.168.1.51 80 1000 tcp 1 1

FortiBalancer1(config)#slb group method “group1” rr

FortiBalancer1(config)#slb group member “group1” “server1” 1

FortiBalancer1(config)#slb group member “group1” “server2” 1

FortiBalancer1(config)#slb virtual http “vip1” 192.168.2.100 80

FortiBalancer1(config)#slb policy default “vip1” “group1”

FortiBalancer2(config)#slb real http “server1” 192.168.1.50 80 1000 tcp 1 1

FortiBalancer2(config)#slb real http “server2” 192.168.1.51 80 1000 tcp 1 1

FortiBalancer2(config)#slb group method “group1” rr

FortiBalancer2(config)#slb group member “group1” “server1” 1

FortiBalancer2(config)#slb group member “group1” “server2” 1

FortiBalancer2(config)#slb virtual http “vip1” 192.168.2.100 80

FortiBalancer2(config)#slb policy default “vip1” “group1”

  • Step 2 Configure a virtual interface name

FortiBalancer1(config)#cluster virtual ifname “port1” 100

FortiBalancer2(config)#cluster virtual ifname “port1” 100

  • Step 3 Configure virtual cluster authentication

It is recommended that you run clustering with an authentication string to avoid unauthorized participation in your cluster.

FortiBalancer1(config)#cluster virtual auth port1 100 0

FortiBalancer2(config)#cluster virtual auth port1 100 0

  • Step 4 Configure virtual cluster preemption

Now we configure FortiBalancer1 to preempt the VIP when the initial master returns online. For FortiBalancer2, it will not preempt the VIP from the master node, but will take over if the master ceases operations.

FortiBalancer1(config)#cluster virtual preempt port1 100 1

FortiBalancer2(config)#cluster virtual preempt port1 100 0

  • Step 5 Define the VIP by the “cluster virtual vip” command

FortiBalancer1(config)#cluster virtual vip “port1” 100 192.168.2.100

FortiBalancer2(config)#cluster virtual vip “port1” 100 192.168.2.100

  • Step 6 Define the priority

Cluster priority determines which node becomes the master: the node with the highest priority becomes the master. Since we want FortiBalancer1 to always be the master of the VIP, we set its priority to 255, and we leave FortiBalancer2’s priority at 100. In a two-node cluster this is permissible; however, when you include more nodes in your cluster, you will need to set a unique priority for each VIP so that the nodes can properly communicate and fail over. To do this, use the following command:

FortiBalancer1(config)#cluster virtual priority port1 100 255

FortiBalancer2(config)#cluster virtual priority port1 100 100

Note: The state on FortiBalancer2 is backup. This is expected, since it has a lower priority than the master.

  • Step 7 Turn on the clustering

FortiBalancer1(config)#cluster virtual on

FortiBalancer2(config)#cluster virtual on

  • Step 8 Turn on fast failover

FortiBalancer1(config)#cluster virtual ffo on

FortiBalancer1(config)#cluster virtual ffo interface carrier loss timeout 1000

FortiBalancer2(config)#cluster virtual ffo on

FortiBalancer2(config)#cluster virtual ffo interface carrier loss timeout 1000
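
With both nodes configured, the failover behavior can be verified. As a sketch (using the tracing command documented in section 4.2.2; the exact output format is not shown in this guide):

FortiBalancer1(config)#show cluster virtual transition

Disconnecting FortiBalancer1’s port1 link or powering it off should cause FortiBalancer2 to take over VIP 192.168.2.100 within the configured carrier-loss timeout, and the resulting state transitions should appear in the output of the command above on each node.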