UPDATE: I have updated this post as we had to re-deploy a new template into AWS and ran into some issues following the original instructions. The steps below are now working.
As the company I am working for has a directive from the CIO to go SaaS or cloud first, I was tasked with attempting to get our current CMX instance into the cloud. Note: whilst Cisco currently has a CMX Cloud offering, it does not yet support the Detect and Locate feature required for our CMX use case.
Whilst the Cisco CMX BU advised me that they (and TAC) would not support the installation process of CMX into AWS (as AWS does not support OVA deployments), they did advise that they (and TAC) would support it if the install was successful. Further to this, the CMX BU asked for a list of the steps we needed to undergo in order to get it working in AWS.
This post outlines the steps required to get CMX working within AWS (I would confirm with your Cisco support managers that they will support your install prior to going into production).
The AWS EC2 instance type that we deployed was the i2.8xlarge, as it was the only EC2 instance that met all the requirements of the CMX high-end specifications. Details of the i2.8xlarge are:
| Model | vCPU | Mem (GiB) | Storage (GB) |
|-------|------|-----------|--------------|
| i2.8xlarge | 32 | 244 | 8 x 800 SSD |
Steps to create the CMX image and migrate it to AWS (these steps were given to me by the sysadmin team, who advised that AWS provided the PowerShell tooling to them):
- Import OVA into on premise VMWare environment
- Startup the VM
- Back up the existing /boot/grub.conf:
cp /boot/grub.conf /boot/grub.conf.bak
- vi /etc/grub.conf and delete the first boot entry, as it points to a modified Linux kernel which AWS does not like and will cause the upload to fail
- Remove the custom kernel configuration from grub.conf entirely
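The grub.conf edit above can be done non-interactively. The sketch below is illustrative only: it builds a hypothetical GRUB legacy file in /tmp (your real CMX grub.conf will differ) and strips the first `title` entry with awk; on the actual VM the target files live in /boot and you would check the result before shutting down.

```shell
# Hypothetical sample of a GRUB legacy grub.conf, with the CMX custom
# kernel as the first boot entry (contents are an assumption for the demo).
cat > /tmp/grub.conf <<'EOF'
default=0
timeout=5
title CMX custom kernel
        kernel /vmlinuz-custom
title CentOS standard kernel
        kernel /vmlinuz-2.6.32
EOF

# Back up, then keep everything except the first "title ..." entry
# (entries in GRUB legacy start at a line beginning with "title").
cp /tmp/grub.conf /tmp/grub.conf.bak
awk '/^title/{entry++} entry != 1' /tmp/grub.conf.bak > /tmp/grub.conf
cat /tmp/grub.conf
```

After the awk pass only the standard-kernel entry remains, which is what the AWS import tooling expects.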
- Shutdown the VM
- Export the VM to OVA
- Steps 9 – 15 were provided by AWS; you might need to contact your AWS resources about this for your environment.
- Upload to AWS using the EC2 command-line tools
- Export as OVF from VMware (this is NOT the option that puts everything into one file). If you already selected the one-file option, right-click the file under lucy.ocio and select 7-Zip → Open archive, then extract the .vmdk file.
- Upload the VMDK using:
$S3_BUCKET_NAME = "shanemo-ami-templates"
$AWS_ACCESS_KEY = "XXXX"
$AWS_SECRET_KEY = "XXXX"
- Once you have exported the OVA in file format (OVF), open a PowerShell window on the server and cd to the directory containing the OVF output files
ec2-import-instance .\is-test05-poc-v1\is-test05-poc-v1-disk1.vmdk --region "ap-southeast-2" --prefix RHEL7AMIv1 -p Linux -t "t2.micro" -f "vmdk" -a "x86_64" --subnet subnet-3fecb55a --instance-initiated-shutdown-behavior stop -b "$S3_BUCKET_NAME" -o "$AWS_ACCESS_KEY" -O "$AWS_ACCESS_KEY" -w "$AWS_SECRET_KEY" -W "$AWS_SECRET_KEY"
- Once VM has been uploaded and instance has been created, convert it to AMI
- The upload process may fail; if it does, simply re-run the upload.
- Run the following command to check the status of the upload:
ec2-describe-conversion-tasks -O "AWS-access_key" -W "secret-key" --region "ap-southeast-2"
- Once this says it is complete, allow up to 30 minutes for the conversion to take place. An easy way to check is to look at the EC2 instance and see whether it has a "root" volume attached.
- Start the Amazon instance
- Restore the original grub configuration:
cp /boot/grub.conf.bak /boot/grub.conf
- Reboot to ensure it works
- Shut down the instance
- Export the instance to an AMI
- Share the AMI with the relevant Amazon regions
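Sharing the AMI across regions can be done with the AWS CLI; the snippet below is a hedged sketch only, with a placeholder AMI ID, name and regions that you would substitute for your own.

```shell
# Hypothetical example: copy the exported AMI into another region.
# ami-xxxxxxxx, the name and both regions are placeholders.
aws ec2 copy-image --source-image-id ami-xxxxxxxx \
    --source-region ap-southeast-2 \
    --region ap-southeast-1 \
    --name "cmx-template-v1"
```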
- Deploy the template and then adjust /etc/hosts (the file has the immutable flag set, so it can't be edited directly; we need to unset the flag, then edit it)
$ su -
$ chattr -i /etc/hosts
$ vi /etc/hosts
:1,$ s/original_hostname/new_hostname/g
- This will change the hostname to the new hostname. We need to do this because the networking is broken at this point, so the standard references in this file point at something else.
- Also ensure the IP address of the hostname is in the hosts file as a fully qualified name
- Edit /etc/sysconfig/network and add the full name of the server
$ chattr +i /etc/hosts
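The hosts-file edit above can also be scripted with sed instead of the interactive vi substitution. The sketch below runs against a demo file in /tmp with made-up hostnames and an assumed layout; on the real instance the target is /etc/hosts, and the chattr calls (shown commented out) require root.

```shell
# Demo hosts file with assumed contents; hostnames are placeholders.
HOSTS=/tmp/hosts.demo
cat > "$HOSTS" <<'EOF'
127.0.0.1   localhost
10.0.1.25   original_hostname.example.com original_hostname
EOF

# chattr -i /etc/hosts        # unset the immutable flag first (root only)
sed -i 's/original_hostname/new_hostname/g' "$HOSTS"
# chattr +i /etc/hosts        # reset the flag once done (root only)
cat "$HOSTS"
```

Note the IP line keeps the fully qualified name first, matching the requirement above that the hostname appears in the hosts file as an FQDN.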
Following installation you will need to configure CMX via the web UI and SSH, as you will not have console access within AWS. Please refer to this post for how to do this.
You will need to ensure that traffic from the WLCs and Prime Infrastructure is able to reach CMX.
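In AWS this usually means opening the relevant ports in the instance's security group. The fragment below is an assumption-laden sketch, not a verified rule set: sg-xxxxxxxx and the CIDR are placeholders, and the port list (NMSP 16113 from the WLCs, HTTPS 443, SSH 22) should be confirmed against Cisco's CMX port documentation for your release before use.

```shell
# Hypothetical security-group rules; group ID, CIDR and ports are assumptions.
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol tcp --port 16113 --cidr 10.0.0.0/8   # NMSP from the WLCs
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol tcp --port 443 --cidr 10.0.0.0/8     # HTTPS from Prime / admins
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol tcp --port 22 --cidr 10.0.0.0/8      # SSH for configuration
```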