VMware ESXi on OCI

The OCI Baremetal servers are very platform agnostic. Today you can already run Hyper-V, Oracle VM and KVM natively on OCI, so I thought it was worth a try to get VMware ESXi running on OCI as well.

Disclaimer: This is NOT officially supported by Oracle! I do think that the actual hardware, the Oracle X7 servers, is supported by VMware, but maybe not in this context. In this article you will see that you can run ESXi on OCI, but with some networking limitations due to the architecture of OCI’s off-box I/O management.

Creating a custom ESXi boot image

The easiest way to get ESXi running on OCI is to build a custom image and upload it to OCI. You can use ESXi or Workstation for this. Just create a virtual machine and install VMware ESXi 6.7 in it.

The minimum Boot Volume size on OCI is 50GB, so I recommend creating a virtual machine with a matching disk size; otherwise the extra space goes to waste, as OCI will always create an iSCSI boot LUN of at least that size.

Important: After the install, power off the virtual machine. Do NOT let it reboot all the way up, as I found that it then sometimes no longer works properly on OCI.

Now that we have the ESXi VM, export it so we have a copy of the VMDK file that boots ESXi.

The next step is to upload this VMDK file into your OCI Object Storage service. To do this, create a bucket (or use an existing one).

While the virtual disk is nominally 50GB, the actual image file (a qcow2 in my case) is only ~350MB, as empty disk space is not stored in the file. So the upload should go fairly quickly.
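
If you prefer to script this step, here is a minimal sketch using the OCI Python SDK (pip install oci); the bucket and file names are placeholders:

```python
import oci
from oci.object_storage import ObjectStorageClient, UploadManager

config = oci.config.from_file()          # reads ~/.oci/config
client = ObjectStorageClient(config)
namespace = client.get_namespace().data

# UploadManager handles multipart uploads for large files automatically.
UploadManager(client).upload_file(
    namespace, "esxi-images", "esxi67.qcow2", "/path/to/esxi67.qcow2")
```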

For the next step, we need to create a pre-authenticated link to this file. Click on the […] menu to the right of the file and select Create Pre-Authenticated Request.
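
The same can be scripted; a sketch with the OCI Python SDK, again with placeholder names:

```python
import oci
from datetime import datetime, timedelta

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

par = client.create_preauthenticated_request(
    namespace_name=namespace,
    bucket_name="esxi-images",
    create_preauthenticated_request_details=oci.object_storage.models.CreatePreauthenticatedRequestDetails(
        name="esxi67-par",
        object_name="esxi67.qcow2",
        access_type="ObjectRead",
        time_expires=datetime.utcnow() + timedelta(days=7),
    ),
).data

# access_uri is only the path; prepend your region's Object Storage endpoint.
print("https://objectstorage." + config["region"] + ".oraclecloud.com" + par.access_uri)
```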

Copy this link and do not lose it! You cannot retrieve it later. Now go to the Compute section in OCI, then to the Custom Images section, and click on the [Import Image] button.

Here we specify the pre-authenticated link we just made, set the Operating System to Linux and set the Launch Mode to Native. Click [Import Image] and the import will start. It should not take too long, but I did find that the page does not refresh automatically.
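
The import can also be started through the SDK; a sketch, with the compartment OCID and pre-authenticated URL as placeholders:

```python
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

image = compute.create_image(
    oci.core.models.CreateImageDetails(
        compartment_id="ocid1.compartment.oc1..xxxx",
        display_name="esxi-6.7-custom",
        launch_mode="NATIVE",
        image_source_details=oci.core.models.ImageSourceViaObjectStorageUriDetails(
            source_uri="<your pre-authenticated request URL>",
            source_image_type="QCOW2",   # use "VMDK" if you uploaded the VMDK directly
        ),
    )
).data
print(image.id, image.lifecycle_state)   # poll until AVAILABLE
```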

After the custom image is created, you can edit its compatible shapes so that only Baremetal instances can use it. I have tested all the currently available BM shapes, with both Intel and AMD EPYC processors, and they all seem to work well.

OCI Networking configuration

You can now start launching ESXi instances, but to make sure you can connect to them, I recommend modifying the security list of the VCN you are using (a scripted sketch follows the list below):

  • Change the ICMP (ping) rule to allow all types and codes. This will allow you to ping the ESXi server remotely, which is handy for checking whether the instance is up.
  • Enable port 443 for managing the ESXi servers.
  • Enable port 902 for making remote console connections to your VMs.
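
A sketch of the same changes via the OCI Python SDK; the security list OCID is a placeholder, and the rules are appended to whatever is already there:

```python
import oci

config = oci.config.from_file()
vcn = oci.core.VirtualNetworkClient(config)

SEC_LIST_ID = "ocid1.securitylist.oc1..xxxx"
current = vcn.get_security_list(SEC_LIST_ID).data

def tcp_rule(port):
    # Allow one TCP port from anywhere.
    return oci.core.models.IngressSecurityRule(
        protocol="6", source="0.0.0.0/0",
        tcp_options=oci.core.models.TcpOptions(
            destination_port_range=oci.core.models.PortRange(min=port, max=port)))

new_rules = [
    # ICMP with no icmp_options = all types and codes
    oci.core.models.IngressSecurityRule(protocol="1", source="0.0.0.0/0"),
    tcp_rule(443),   # ESXi host client
    tcp_rule(902),   # remote console connections
]

vcn.update_security_list(
    SEC_LIST_ID,
    oci.core.models.UpdateSecurityListDetails(
        ingress_security_rules=current.ingress_security_rules + new_rules,
        egress_security_rules=current.egress_security_rules))
```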

Creating ESXi Instances

You can now create a new instance. Click the [Change Image Source] button and select your ESXi boot image on the Custom Images tab. Choose the shape you want and you are all set. You can ignore the SSH key fields, as you will log in with the root password you set during the ESXi installation in your local VM.
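
Launching can be scripted too; a sketch with all OCIDs, the availability domain and the shape as placeholders:

```python
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

instance = compute.launch_instance(
    oci.core.models.LaunchInstanceDetails(
        availability_domain="XXXX:EU-FRANKFURT-1-AD-1",
        compartment_id="ocid1.compartment.oc1..xxxx",
        display_name="esxi-host-1",
        shape="BM.Standard2.52",             # any compatible Baremetal shape
        create_vnic_details=oci.core.models.CreateVnicDetails(
            subnet_id="ocid1.subnet.oc1..xxxx",
            assign_public_ip=True),
        source_details=oci.core.models.InstanceSourceViaImageDetails(
            image_id="ocid1.image.oc1..xxxx"),   # the imported ESXi image
    )
).data
print(instance.id, instance.lifecycle_state)
```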

Make sure the instance is connected to the correct VCN with the right security list, and after a minute or two your ESXi server should be up and running.

You will notice during start-up that at some point about five pings are returned, followed by a few timeouts again. After a while the host will respond again and you should be able to log in to the ESXi console.

The first configuration change I recommend is setting the power policy to High Performance; this ensures your VMs run at the best possible speed 🙂
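
This can be automated with pyVmomi (pip install pyvmomi); a sketch against a standalone ESXi host, with the hostname and password as placeholders. Key 1 is usually the High Performance policy, but check availablePolicy rather than relying on that:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE     # ESXi ships with a self-signed certificate

si = SmartConnect(host="esxi-host-1", user="root", pwd="secret", sslContext=ctx)
host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

# Print the available policies and their keys, then apply High Performance.
for policy in host.config.powerSystemCapability.availablePolicy:
    print(policy.key, policy.shortName)
host.configManager.powerSystem.ConfigurePowerPolicy(key=1)
Disconnect(si)
```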

Setting up VMFS datastores

Before we can run virtual machines, we need a datastore to store our VMs on. If you selected a Baremetal shape with local NVMe disks, you can create a VMFS datastore on those disks. If you are using a Baremetal shape without local storage, we need to connect to remote storage. The Oracle cloud can provide Block Volumes using iSCSI, and it can provide NFS shares using the OCI File Storage service.

While I was able to assign Block Volumes to the ESXi host and make them visible by configuring the iSCSI parameters, I was NOT able to create a VMFS datastore on them. So this will need some future investigation.

For now, we can create an NFS datastore using the File Storage service.

In the OCI File Storage portal, create a new file system. After it is created, it will provide you with the IP address you can use to connect to this storage.

On your ESXi server, enable the “Provisioning” service on the existing VMkernel NIC.
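
Scripted, this looks like the sketch below (same pyVmomi boilerplate and placeholders as before, assuming the default VMkernel NIC vmk0):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="esxi-host-1", user="root", pwd="secret", sslContext=ctx)
host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

# "vSphereProvisioning" is the API name behind the "Provisioning" checkbox.
host.configManager.virtualNicManager.SelectVnicForNicType("vSphereProvisioning", "vmk0")
Disconnect(si)
```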

Now your ESXi kernel can connect to the file service, so you can go into your storage configuration and mount the NFS share.

Use the mount IP address and export path from the File Storage service, and you have created a datastore. In my case the File Storage service reports 8,388,608 TB available, so I guess you will not run out of disk space quickly 🙂 If you create multiple ESXi servers, they can of course all connect to the same NFS share, so you can access all your VMs and ISO files from every ESXi server.
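
Mounting the share can be scripted as well; a pyVmomi sketch where the mount IP, export path and datastore name are placeholders taken from the File Storage details page:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="esxi-host-1", user="root", pwd="secret", sslContext=ctx)
host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

spec = vim.host.NasVolume.Specification(
    remoteHost="10.0.0.5",           # mount IP from the File Storage service
    remotePath="/esxi-datastore",    # export path of the file system
    localPath="oci-fss",             # datastore name as it will appear in ESXi
    accessMode="readWrite",
    type="NFS")                      # OCI File Storage exports NFSv3
host.configManager.datastoreSystem.CreateNasDatastore(spec)
Disconnect(si)
```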

You can now upload an ISO file or an existing virtual machine's files and start running virtual machines.

Networking

There is one final thing we need to take care of, and that is networking. What is unique about OCI is that I/O is managed off-box: the OCI network controls which MAC and IP addresses are allowed on which physical ports. Unfortunately, you cannot give your VMs arbitrary MAC and IP addresses, as their traffic will simply be blocked.

On your ESXi instance, you need to create one VNIC per virtual machine.
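
A sketch of attaching a secondary VNIC with the OCI Python SDK; the instance and subnet OCIDs are placeholders. The attachment reports the VLAN tag and the VNIC reports the MAC address, both of which you need on the ESXi side:

```python
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

attachment = compute.attach_vnic(
    oci.core.models.AttachVnicDetails(
        instance_id="ocid1.instance.oc1..xxxx",   # the ESXi Baremetal instance
        display_name="vm1-vnic",
        create_vnic_details=oci.core.models.CreateVnicDetails(
            subnet_id="ocid1.subnet.oc1..xxxx",
            assign_public_ip=True),    # False for a private-only VNIC
    )
).data

# Wait until the attachment is ATTACHED, then read out VLAN tag and MAC.
attachment = oci.wait_until(
    compute, compute.get_vnic_attachment(attachment.id),
    "lifecycle_state", "ATTACHED").data
vnic = oci.core.VirtualNetworkClient(config).get_vnic(attachment.vnic_id).data
print(attachment.vlan_tag, vnic.mac_address, vnic.private_ip)
```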

You can choose whether your VNIC has only a private IP address, or both a public and a private address. You will need to configure the virtual machine with the MAC address assigned to the VNIC. You will also need to create a port group just for this VM, using the VLAN ID assigned to the VNIC, and connect your VM to that port group.
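
Creating the per-VM port group can also be done with pyVmomi; a sketch where the port group name, VLAN tag (from the VNIC attachment above) and vSwitch name are placeholders. Setting the VM's NIC to the VNIC's MAC address is then a manual step in the host client:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="esxi-host-1", user="root", pwd="secret", sslContext=ctx)
host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

spec = vim.host.PortGroup.Specification(
    name="vm1-portgroup",
    vlanId=2,                  # the vlan_tag reported by the VNIC attachment
    vswitchName="vSwitch0",
    policy=vim.host.NetworkPolicy())
host.configManager.networkSystem.AddPortGroup(portgrp=spec)
Disconnect(si)
```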

Now your VM is ready to install and run, and you will be able to connect to it over the internal and public network, subject to the OCI security rules.

Conclusion

So yes, you can run VMware ESXi natively on OCI Baremetal servers. Due to OCI’s off-box I/O management, you do need to manually set up the networking for your VMs to match the OCI network settings. This limits some VMware functionality, such as vMotion. You can possibly overcome this by using NSX and routing all traffic through one MAC/IP address, but this needs further research.

For storage you can only use local NVMe and File Storage. Block Volumes via iSCSI do seem to connect, but I was not able to create a VMFS volume on them.

The File Storage service seems to perform well, and you can easily add or remove ESXi servers in your environment and have them all connect to it directly.

Once you have created the custom ESXi boot image, it is easy to spin up ESXi Baremetal servers on demand and delete them when they are not needed. I found that provisioning a new ESXi server takes less than 5 minutes until it is completely up and running.