FreeNAS 11 iSCSI with ESXi 6.5 Lab Setup

In my home lab setup I’ve currently got 1 FreeNAS box and 1 ESXi box. They’re connected using a multipath iSCSI link on cheap quad-gigabit cards I bought used. This setup works quite well for home lab use and provides a safe enough place to store my VMs. In this article I’ll guide you through the setup process I’ve used to get iSCSI working between FreeNAS and ESXi.

I’ll presume you’ve got fresh FreeNAS and ESXi installs and quad or dual gigabit links between the two systems.


iSCSI Setup FreeNAS Side

For the links between the two systems we’ll want one subnet per connection for multipath iSCSI to work correctly. We’ll give each link its own /24 subnet like this:

FreeNAS   Cable   ESXi
10.0.0.1 <------> 10.0.0.2
10.0.1.1 <------> 10.0.1.2
10.0.2.1 <------> 10.0.2.2
10.0.3.1 <------> 10.0.3.2

In the FreeNAS WebGUI, go to Network > Interfaces > Add Interface.

Give the interface the same name as the NIC shown in the NIC dropdown, type in the IP, select the netmask and finally, type mtu 9000 into the Options field. An MTU of 9000 (jumbo frames) reduces per-packet overhead and helps with performance.

Do this for all interfaces you’re planning on using for iSCSI.
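
If you want to sanity-check an interface from the FreeNAS Shell afterwards, ifconfig shows both the address and the MTU (igb0 below is just an example NIC name; substitute yours):

# Confirm the IP and jumbo MTU took effect
ifconfig igb0
# Look for "mtu 9000" and an inet line with your iSCSI IP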

Now we can set up iSCSI itself: go to Sharing > Block (iSCSI) > Portals > Add Portal. You’ll want to click Add extra Portal IP at the bottom and add all of your interfaces’ IPs.

Go to Initiators > Add Initiator and click OK; the default ALL/ALL entry, which allows any initiator on any network, is fine for a closed lab like this.

Next up, Targets > Add Target. Pick a name and select your portal group and initiator group.

Under Extents > Add Extent you’ll need to pick either a device (zvol) you’ve created or create a file to share. In my case I’ve created a zvol to use as a device extent.
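
For reference, a zvol is created under Storage > Volumes in the WebGUI, or with a single command from the Shell. The pool name tank, the name vmstore and the 500G size below are just example values; substitute your own:

# Create a 500G zvol to use as a device extent
zfs create -V 500G tank/vmstore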

Go to Associated Targets > Add Target / Extent and add the target and extent you’ve created.

The final step is to enable the iSCSI service by going to Services and clicking Start Now for iSCSI. Also tick the Start on boot box.
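
Under the hood, FreeNAS 11’s iSCSI target is the FreeBSD ctld daemon, so if you want to confirm the service really came up you can check it from the Shell (an optional sanity check):

# Confirm the iSCSI target daemon is running
service ctld status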

FreeNAS should now be ready, so we can move on to ESXi.


ESXi Network Setup

Open up your WebGUI for ESXi 6.5 and navigate to Networking > Physical NICs. Take note of the vmnic numbers on the adapters you’re going to use. In my case vmnic0 was easy to spot as the odd one out, and therefore not part of my quad-gigabit card: MAC addresses on the same card will generally differ only in the last hex value, incremented by 1 per port.

Once you know which ports to add, go to the Virtual Switches tab and click Add standard virtual switch. You’ll want to set the MTU to 9000 as we did in FreeNAS. Make a virtual switch for each network connection you’ll have.
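
You can verify the switches took the jumbo MTU either by re-opening each one in the WebGUI or, once SSH is enabled later on, from the shell:

# List standard vSwitches; the MTU column should read 9000
esxcli network vswitch standard list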

In the top tab, go to VMkernel NICs and click Add VMkernel NIC. Fill in the name, select the switch, set the MTU to 9000 and give it a static IP (the ESXi addresses from the table above). Create one for each virtual switch.
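
Similarly, the VMkernel NICs can be double-checked in one go from the shell:

# List VMkernel NICs; check the IP and MTU columns
esxcfg-vmknic -l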

Plugging in the Ethernet cables can be tricky if you don’t know which physical interfaces have which IP. You can look at the Interfaces page in FreeNAS and the Physical NICs page in ESXi to see which links are up or down. You can also ping from the FreeNAS Shell to test connections.
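
A quick way to prove jumbo frames pass end to end is to ping with a large payload and the don’t-fragment bit set. 8972 bytes is the largest ICMP payload that fits in a 9000-byte MTU (9000 minus 20 bytes of IP header and 8 bytes of ICMP header); the addresses below assume the first link from the table earlier:

# From the FreeNAS Shell (FreeBSD ping: -D sets don't fragment)
ping -D -s 8972 10.0.0.2

# From the ESXi shell: -d sets don't fragment
vmkping -d -s 8972 10.0.0.1

If these fail while a plain ping works, an MTU mismatch somewhere on that link is the usual culprit.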


iSCSI Setup on ESXi

Now that the network settings are out of the way we can configure iSCSI itself. Go to Storage > Adapters > Configure iSCSI and tick the Enabled box. Under Network port bindings add all of your VMkernel NICs, and add all of your FreeNAS iSCSI IPs to Dynamic targets. Click Save configuration; when you open Configure iSCSI again, the Static targets section will have been filled in automatically from the dynamic targets.
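
Once you’re at the shell (SSH is enabled a few steps below), the software iSCSI adapter and its sessions can be checked too:

# Show the software iSCSI adapter (it will appear as vmhba64 or similar)
esxcli iscsi adapter list

# List active iSCSI sessions; you should see one per path
esxcli iscsi session list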

Once you’re out of that, click on the Datastores tab and then New datastore:

  1. Create new VMFS datastore
  2. Select the iSCSI share from FreeNAS and name it
  3. Use full disk
  4. Finish
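
To confirm the new datastore actually mounted, the Datastores tab should now list it; from the shell, this shows the same thing:

# List mounted filesystems; the new VMFS datastore should appear here
esxcli storage filesystem list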

You should be able to use the datastore now; however, there are a few more steps to set up a round robin configuration for optimal performance.

For the next steps we’ll enable SSH and set iSCSI to use round robin. To enable SSH, go to Manage > Services, click on TSM-SSH and then click Start above the list of services.


Configure Round Robin Path Selection

If you’re on Windows, grab PuTTY and SSH into the ESXi box. Here you can run the following to get the ID of the iSCSI device:

esxcli storage nmp device list

The information you want is the naa. ID on the first line of the device’s entry, shown below:

naa.6589cfc00000009e27c03355442167c8
Device Display Name: FreeNAS iSCSI Disk (naa.6589cfc00000009e27c03355442167c8)
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_OnRetryErrors=off; {TPG_id=1,TPG_state=AO}}
Path Selection Policy: VMW_PSP_MRU
Path Selection Policy Device Config: Current Path=vmhba64:C4:T0:L0
Path Selection Policy Device Custom Config:
Working Paths: vmhba64:C4:T0:L0
Is USB: false

To change the path selection policy to round robin, use this command, replacing NAA_HERE with your naa. ID:

esxcli storage nmp device set --device NAA_HERE --psp VMW_PSP_RR

Next we can change the iSCSI IOPS setting from the default of 1000 down to 1, which helps a lot with performance on multipath links. To do that, take naa. plus the first 4 digits (in my example, naa.6589) and place that in the NAA_HERE spot in this command:

for i in `esxcfg-scsidevs -c |awk '{print $1}' | grep NAA_HERE`; do esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$i; done
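
If you’d rather check a single device than re-read the whole list, you can also query the round robin config directly (again replacing NAA_HERE with your naa. ID):

esxcli storage nmp psp roundrobin deviceconfig get --device=NAA_HERE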

You can run the first command from earlier again to make sure everything has worked: Path Selection Policy should now read VMW_PSP_RR and the Path Selection Policy Device Config line should show policy=iops,iops=1. You’ll be presented with something like this:

naa.6589cfc00000009e27c03355442167c8
Device Display Name: FreeNAS iSCSI Disk (naa.6589cfc00000009e27c03355442167c8)
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_OnRetryErrors=off; {TPG_id=1,TPG_state=AO}}
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=iops,iops=1,bytes=10485760,useANO=0; lastPathIndex=0: NumIOsPending=0,numBytesPending=0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba64:C3:T0:L0, vmhba64:C4:T0:L0, vmhba64:C14:T0:L0, vmhba64:C9:T0:L0
Is USB: false

Conclusion

With this setup you get a killer NAS and safe, high-performance VM storage on a budget. VMs will boot quite snappily thanks to ZFS’s ARC (Adaptive Replacement Cache), provided your FreeNAS box has plenty of RAM.
