Auto Scaling comprises OpenStack packages that enable DNS Auto Scaling for virtual DNS cache acceleration using a NIOS Grid. Auto Scaling helps you ensure that you have the required number of resources to handle the load in your application. You can create an Auto Scaling group and specify the minimum, maximum, and desired number of resources for each group. Auto Scaling ensures that the group always contains the required number of resources, neither exceeding the maximum limit nor falling short of the minimum. With Auto Scaling, you can adjust scaling to best meet the needs of your applications by automatically increasing or decreasing the computing capacity of the associated application. For more information about virtual DNS cache acceleration, refer to Configuring DNS Cache Acceleration on IB-FLEX.
Infoblox supports Auto Scaling for OpenStack only.
Infoblox supports Auto Scaling on the IB-FLEX and the following vSOT platforms: IB-V815, IB-V825, IB-V1415, IB-V1425, IB-V2215, IB-V2225, IB-V4015, and IB-V4025. You can install Auto Scaling on both SRIOV and Non-SRIOV servers, but only on the IPv4 interfaces.
You can create an Auto Scaling group using the standard OpenStack orchestration component, Heat, and specify the type of resource to be scaled. You can also define policies that indicate when and how to scale the resource. For example, you can define an Auto Scaling group of server resources and configure it to launch a new server instance through OpenStack when the aggregate CPU utilization of the entire group exceeds the CPU utilization threshold for a specified period of time. You can also reduce the number of servers if the CPU utilization has been low for a longer period. For more information about OpenStack Heat, refer to https://github.com/infobloxopen/heat-infoblox.
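As a sketch of how such a policy might be wired together in Heat, the following fragment shows an autoscaling group, a scale-up policy, and a CPU alarm that triggers it. The resource names, thresholds, and the nested member-server.yaml reference are illustrative assumptions, not taken from the shipped templates:

```yaml
# Illustrative only: names, thresholds, and the nested template are assumptions.
resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 5
      resource:
        # Nested template that defines a single server instance
        type: member-server.yaml
  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: asg}
      scaling_adjustment: 1
      cooldown: 120
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 120
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      alarm_actions:
        - {get_attr: [scaleup_policy, alarm_url]}
```

When the alarm fires, it invokes the scaling policy's alarm URL, and the group grows by the amount set in scaling_adjustment.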
For virtual DNS cache acceleration, new NIOS Grid members are launched through OpenStack based on the queries per second. The new member acts as a secondary DNS. Note that the Auto Scaling for virtual DNS cache acceleration feature consists of two components, heat-infoblox and ceilometer-infoblox, which are Python packages that must be installed with pip. For more information, refer to https://github.com/infobloxopen/heat-infoblox and https://github.com/infobloxopen/ceilometer-infoblox.
The heat-infoblox package contains Heat resource classes for Grid members and name server group entries, along with the supporting code. The Heat resource classes enable you to add and remove Grid members, enable or disable DNS service for Grid members, and add Grid members as secondary servers in a name server group or remove them from it. Note that all of these operations are orchestrated through the Heat engine.
You must restart the Heat engine after you install and configure the package.
The ceilometer-infoblox package contains code that enables SNMP polling of the Infoblox NIOS instances within OpenStack to gather DNS queries per second. After installing OpenStack Ceilometer, you must install the ceilometer-infoblox package on each compute node. For more information about the configuration details, refer to https://github.com/infobloxopen/ceilometer-infoblox.
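The queries-per-second figure itself is just the delta between two successive readings of a monotonically increasing DNS query counter, divided by the polling interval. The following minimal Python sketch (function name and values are illustrative, not from the package) shows the arithmetic:

```python
# Hypothetical sketch: deriving queries-per-second from two successive
# SNMP readings of a cumulative DNS query counter.
def qps_from_counters(prev_count, curr_count, interval_seconds):
    """Return the average query rate over the polling interval."""
    if interval_seconds <= 0:
        raise ValueError("polling interval must be positive")
    return (curr_count - prev_count) / interval_seconds

# e.g. 600,000 queries accumulated over a 120-second polling window
rate = qps_from_counters(1_000_000, 1_600_000, 120)
print(rate)  # 5000.0
```

A rate computed this way is what the QPS alarms described later compare against their thresholds.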
You must configure OpenStack Ceilometer in the compute node and restart the compute polling agent.
Configuring NIOS Tenant
The heat-infoblox/doc/templates/ directory contains a setup.sh script that creates a tenant, a user, and the appropriate images.
To configure a NIOS tenant:
- Log in to the OpenStack admin user account, copy the images to the heat-infoblox/doc/templates/ directory, and execute the setup.sh script.
- Next, switch to the OpenStack nios user account and launch a heat stack using the following command:
heat stack-create -f member-server.yaml -P "mgmt_network=management-net;lan1_network=service-net" member-server
- To view the progress, go to the Orchestration -> Resource Types section of the OpenStack Horizon UI or use the heat CLI utility.
- Execute the member.yaml script to create a member in NIOS. Note that this script neither creates a server nor adds the member to the name server group.
- Execute the member-server.yaml script. This script creates a member in NIOS, adds the member to the name server group, launches a Nova server, and pushes parameters to the new server so that it can automatically join the Grid.
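As a hedged illustration of the kind of resource such a template declares, the fragment below sketches a Grid member entry. The Infoblox::Grid::Member type name and its properties are assumptions based on the heat-infoblox package description, not copied from the shipped member.yaml:

```yaml
# Hypothetical fragment; resource type and property names are assumptions.
resources:
  grid_member:
    type: Infoblox::Grid::Member
    properties:
      name: member1.example.com
      MGMT: {network: management-net}
      LAN1: {network: service-net}
```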
Configuring NIOS Grid Master
To configure the Grid Master:
- Configure a default name server group with the Grid Master as the primary, and name it default.
- Generate license pools for all the required licenses.
- Get the Grid Master certificate using the following command, and save it:
echo | openssl s_client -connect gm-ip:443 2>/dev/null | openssl x509
- Configure SNMP and set the community string to public.
If you set up a management interface on the Grid Master, you must add an interface on the router and connect it to the management network. You must also provide the MGMT IP address in
Configuring Auto Scale
Auto Scaling uses groups and members to automatically scale resources up or down based on the QPS alarm. Depending on the threshold and the period defined in the respective QPS alarm, the alarm triggers either a scale-up or a scale-down of resources.
For example, if qps_alarm_high is set to 5000 and the period is set to 120, the scaleup_policy is triggered when the query rate remains above the threshold of 5000 continuously for that period. The server then adds members to the Grid based on the value set for scaling_adjustment in the scaleup_policy. If scaling_adjustment is set to 2, the server adds two members. Similarly, for qps_alarm_low, members are removed from the Grid.
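The scale-up/scale-down decision implied above can be sketched in a few lines of Python. The function name, the qps_alarm_low value, and the min/max bounds are illustrative assumptions; only the 5000/120/2 figures come from the example:

```python
# Hypothetical sketch of the QPS-driven scaling decision; names and the
# low-water mark and size bounds are illustrative, not from the product.
def adjust_group_size(current, qps, qps_alarm_high=5000, qps_alarm_low=1000,
                      scaling_adjustment=2, min_size=1, max_size=10):
    """Return the new member count after evaluating the QPS alarms."""
    if qps > qps_alarm_high:
        current += scaling_adjustment      # scaleup_policy fires
    elif qps < qps_alarm_low:
        current -= scaling_adjustment      # scaledown path fires
    return max(min_size, min(max_size, current))

print(adjust_group_size(2, 6000))  # high alarm: 2 + 2 = 4 members
print(adjust_group_size(4, 500))   # low alarm: 4 - 2 = 2 members
```

Clamping to the group's minimum and maximum sizes mirrors how an Auto Scaling group never exceeds its configured bounds.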
You must define the LAN1 port and Grid member details in autoscale-member.yaml. The URL and the certificate specified in autoscale-member.yaml are used to join the member to the Grid. This file also contains the name of the group to which the member belongs. Note that the queries generated for the respective member are directed through the Anycast Loopback address, so the server knows when to scale up as the load increases.
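The values described above might be declared in autoscale-member.yaml along the following lines; the parameter names here are illustrative assumptions, not the actual names in the shipped template:

```yaml
# Hypothetical parameter block; names are assumptions.
parameters:
  gm_url:
    type: string
    description: URL the new member uses to join the Grid
  gm_certificate:
    type: string
    description: Grid Master certificate retrieved with openssl
  ns_group:
    type: string
    description: Name server group to which the member is added
  lan1_network:
    type: string
    description: Network attached to the LAN1 port
```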
To install the vNIOS software package and configure Auto Scale in the OpenStack environment, complete the following:
- Set up the OpenStack server. Example: 10.39.50.13 (non-SRIOV server).
- Install the relevant package for the operating system that you use: CentOS for KVM, or RHEL (Red Hat Enterprise Linux) for KVM-based OpenStack.
- Download the qcow2 file from the Infoblox Technical Support site. For more information, see Requirements. Upload the qcow2 image file to the OpenStack server. Example:
- Log in using the command ssh root@<openstack server ip>. Example:
- Ensure that the files mentioned below exist in the /opt/templates/ directory:
[root@rhel72-10-39-50-13 ~]# source keystonerc_admin
[root@rhel72-10-39-50-13 ~(keystone_admin)]# cd /opt/templates/
- Create and configure the Grid Master using the following command:
[root@rhel72-10-39-50-13 templates(keystone_admin)]# openstack stack create -f yaml -t gm.yaml --parameter "imageName=DCA_354869" GMaster
Ensure that you update config-gm.sh. It automatically uses the floating IP, or the LAN1 IP for SRIOV, to generate the certificate. It then starts the DNS service, adds the required records, such as FQDN and A records, to the zones, and creates the name server groups to which the member belongs. The following command generates an environment file, where 10.39.52.162 is the floating IP of the Auto Scale member:
[root@rhel72-10-39-50-13 templates(keystone_admin)]#./config-gm.sh 10.39.52.162
- Next, create an Auto Scale stack and launch the Auto Scale using the following command:
[root@rhel72-10-39-50-13 templates(keystone_admin)]# openstack stack create -e gm-10.39.52.162-env.yaml -f yaml -t autoscale.yaml autoscale
The IP address 10.39.52.162 is the IP address of the Grid Master that joins the Auto Scale member to the Grid.
Note that the Auto Scale stack uses autoscale.yaml, which in turn uses autoscale-member.yaml, to direct queries and scale up resources when the query rate exceeds the threshold.