LiquidObject

vSphere slotsfile 0x10000042 error

Every once in a while when using SIOC (Storage I/O Control) within ESXi, the hosts will get out of whack with regards to reading and writing to the slotsfile. If you take a look at your syslog output you might see something like the following.

<14>2013-12-12T06:49:58.987Z vmsrv-13.uwgb.edu storageRM: open /vmfs/volumes//VM-LUN-1/.naa.60002ac0000000000000002900001234/slotsfile(0x10000042, 0x0) failed: No such file or directory
<14>2013-12-12T06:49:58.987Z vmsrv-13.uwgb.edu storageRM: Giving UP No such file or directory Error -1 opening SLOT file /vmfs/volumes//VM-LUN-1/.naa.60002ac0000000000000002900001234/slotsfile
<14>2013-12-12T06:49:58.987Z vmsrv-13.uwgb.edu storageRM: Error -1 in opening & reading the slot file
<14>2013-12-12T06:49:58.987Z vmsrv-13.uwgb.edu storageRM: Failed to read slot file

Fortunately, the correction for this is very simple.

Let's stop the SIOC service:

/etc/init.d/storageRM stop

Now let's start it back up:

/etc/init.d/storageRM start

Monitor the syslog entries for the next few seconds and you should see them clear up pretty quickly.
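
Something like the following can be used to watch for further storageRM entries (the /var/log/syslog.log path is where ESXi 5.x typically writes these; adjust if your host logs elsewhere):

tail -f /var/log/syslog.log | grep storageRM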

The original reference for this post came from a post of Frank's covering the same issue, just seen in a different area of vSphere.

December 12, 2013 at 1:03 am Comments (0)

FreeNAS 8.3 and VMware ESXi’s VMXNet3 adapter

The only built-in networking support under FreeNAS 8.3 is the e1000 adapter, and while it does “work”, it really lacks performance in a virtual environment. To get around this limitation we need to install VMware Tools to support more modern networking adapters. While this question is asked time and time again in the FreeNAS forums and elsewhere, I never see a straightforward solution for adding the VMXNet3 adapter. So here we go.

We’ll assume you already have your VM deployed with one e1000 and one vmxnet3 adapter and we are just loading in the drivers.


Add Perl

Pull up the shell or connect via SSH to your FreeNAS VM

mount -urw /
cd /tmp
pkg_add -r perl -K
tar -xjf perl.tbz
cp lib/perl5/5.12.14/mach/CORE/libperl.so  /lib

(the Perl version in the path will change as time goes on)

Add compat6x

pkg_add -r compat6x-amd64

Install VMware Tools

It is assumed you are installing with the default options.

Install VMware tools as normal via the "Install/Upgrade VMware tools" menu option
mkdir /mnt/cdrom
mount -t cd9660 "/dev/iso9660/VMware Tools" /mnt/cdrom
cd /tmp
tar zxpf /mnt/cdrom/vmware-freebsd-tools.tar.gz
umount /mnt/cdrom
cd vmware-tools-distrib
./vmware-install.pl
/usr/local/bin/vmware-config-tools.pl

Ignore the failed notice for the memory manager. At this point VMware Tools is installed but still needs some tweaking.

VMware tools tweaking

vi /usr/local/etc/rc.d/vmware-tools.sh
Look for: if [ "$vmdb_answer_VMHGFS_CONFED" = 'yes' ]; then    and change yes to xyes
Look for: if [ "$vmdb_answer_VMMEMCTL_CONFED" = 'yes' ]; then    and change yes to xyes
Look for: if [ "$?" -eq 0 -a "$vmdb_answer_VMXNET_CONFED" = 'yes' ]; then    and change yes to xyes
Save and close vi (Esc, then :wq, then Enter)
rm /etc/vmware-tools/not_configured
reboot
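
As an alternative to editing the file interactively in vi, a quick sed pass can make the same three substitutions before you remove the not_configured flag and reboot. This is only a sketch and assumes the conditionals appear exactly as quoted above; back up the script first.

cp /usr/local/etc/rc.d/vmware-tools.sh /usr/local/etc/rc.d/vmware-tools.sh.bak
sed -i '' \
  -e "s/VMHGFS_CONFED\" = 'yes'/VMHGFS_CONFED\" = 'xyes'/" \
  -e "s/VMMEMCTL_CONFED\" = 'yes'/VMMEMCTL_CONFED\" = 'xyes'/" \
  -e "s/VMXNET_CONFED\" = 'yes'/VMXNET_CONFED\" = 'xyes'/" \
  /usr/local/etc/rc.d/vmware-tools.sh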

Now within the FreeNAS WUI (Web User Interface) add an additional network adapter; you’ll see the vmxnet3 adapter listed as “vmx3f0”.


I’m seeing the following differences when copying sequential data (a 4GB ISO) to and from a test system over SSD and gigabit infrastructure.

e1000 Adapter

  • Read: 50 MB/sec to 59MB/sec (for first 2GB then 73 MB/sec)
  • Write: 33.0 MB/sec to 35 MB/sec

VMXNet3 Adapter

  • Read: 93MB/sec to 95MB/sec
  • Write: 29.5 MB/sec to 42.1 MB/sec

My VM configuration

  • vCPU: 3
  • Ram: 6GB
  • Drives: 4GB vmdk, 3×1.5TB virtual RDM
  • Raidz
  • NIC: e1000(management),VMXNET3(data)
  • VM Hardware Version: VMX-09

My host config

  • CPU: Dual Xeon e5320’s
  • Ram: 24GB ECC DDR2
  • Controllers: IBM M1015 (IT firmware), LSI 8308ELP
  • Drives: 2x500GB(hardware mirror), 3×1.5TB(7200.11)(FreeNAS virtual RDM’s)
  • NIC: Onboard Intel 1000pro
  • OS: ESXi 5.1 Update 1

Sorry, no VT-d on this host to pass through the M1015, which may be adding a small amount of overhead to the virtual RDM’s.

March 13, 2013 at 8:24 pm Comment (1)

NAS4Free under ESXi


One surprising thing I noticed when testing out NAS4Free is the lack of documentation regarding installation on VMware. I can understand the viewpoint that a NAS is a NAS and shouldn’t be anything else, but what about workloads where that kind of raw performance is not required (granted, the virtualization overhead these days should be within 5-10% of physical)? In any case, the instructions below are written for ESXi 5.1 running NAS4Free 9.1.0.1 and, for ease of reading, are broken down into three sections.


Initial download and VM configuration


1)      Download the latest x64 release at http://www.nas4free.org/downloads.html

2)      Create a new custom Virtual Machine (Assume defaults unless otherwise specified)

a)      Guest Operating System – Other – FreeBSD 64-Bit

b)      Virtual sockets – 3  (you can use fewer, but I was seeing a significant performance hit with fewer than 3)

c)      Memory: 4GB (not a hard requirement but in general the more the better)

d)     Network

i)        Number of NICs: 2

ii)      NIC 1 Adapter: e1000
(The e1000 will be used for management only as the default NAS4Free install does not correctly load the VMXNet3 driver)

iii)    NIC 2 Adapter: VMXNet3
(The VMXNet3 adapter will be used for Samba/NFS/iSCSI traffic)

e)      SCSI Controller: LSI Logic Parallel

f)       Disk: 4GB, can be thin provisioned

3)      Finish creating the VM

4)      Edit the VM

5)      Add your additional hard disks and assign them starting at 1:0, 1:1, 1:2 (virtual RDM’s are an option as well). Use one VMDK per disk unless you’re really just evaluating the feature set.

a)      Optionally, if supported, you could use DirectPath I/O to pass through your favorite SCSI controller

6)      Change your newly created SCSI Controller to: LSI Logic SAS
(Paravirtual does not function with the version of VMware Tools pre-bundled with the NAS distro)

7)      Select “OK” to complete the modifications


NAS4Free base install


1)      Boot the VM and start it off of the recently downloaded ISO

2)      Walk through the normal installer screens, selecting “Install ‘embedded’ OS on HDD/FLASH/USB”
(The full install is extremely buggy at this point and is really only intended for NAS4Free developers)

3)  Install onto the 4GB volume

4)  After the install completes, reboot and disconnect the ISO volume

5)  Configure your management IP


Configuring your new VM


1)      Log in to the web administration interface of the new VM

2)      Select System –> Advanced

3)      Select rc.conf

4)      We need to add some custom tuning for the VM (the resulting rc.conf entries are sketched after this list)

a)      Add – Name: vmguestd_enable – Value: Yes

b)      Add – Name: vmsetup_enable – Value: Yes

c)      Optionally (useful for debugging sometimes)

i)        Add – Name: dmesg_enable – Value: Yes

5)      Apply changes

6)      System –> Reboot
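
For reference, once applied the entries from step 4 end up as rc.conf variables along these lines (a sketch of the expected result; the WUI writes them for you, so there is normally no need to touch the file by hand):

vmguestd_enable="YES"
vmsetup_enable="YES"
dmesg_enable="YES"   # optional, handy for debugging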


At this point the VM should be fully usable. If you run into performance problems, top within the VM and the vSphere performance graphs are where to start looking; VM CPU usage and disk latency are generally the first points of issue.
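
On the host side, esxtop gives a quick view of the same counters; the keys below are the standard esxtop view switches (a quick sketch):

esxtop
# within esxtop press:
#   c - CPU view (watch %RDY for the NAS VM)
#   v - virtual machine storage view (per-VM disk latency)
#   n - network view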

Enjoy.


March 9, 2013 at 4:45 pm Comments (12)

Slow snapshots with VMware

Within vSphere, one of the common features available is the ability to take snapshots. For a couple of years now, taking a snapshot has included an option called “Snapshot the virtual machine’s memory”, which captures the target VM with a perfect run-time state.

This feature comes at a price: time. Recently I’ve been going after some of the larger servers in my environment, in this particular test case some new Exchange CAS/HUB servers. Taking a snapshot normally would complete within 5 seconds. However, with the given VMs running 4 vCPUs and 8GB of RAM each, taking a snapshot of the VM’s memory was taking over 21 minutes. The issue only shows itself during creation; when merging snapshots back together there is no unusual delay.

Now there is a way to improve the performance; however, it requires editing the vmx config file, either manually by hand, via PowerCLI, or via the vSphere client.

Here’s how with the vSphere client.

With the given VM powered off, edit it and select the options tab.

Select Configuration Parameters, click “Add Row” twice, and then insert the following:

Name: mainMem.ioBlockPages
Value: 2048

Name: mainMem.iowait
Value: 2

Then select “Ok” twice and power back on the VM to test it again.
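
If you would rather add them to the .vmx file by hand instead (with the VM powered off), the same two settings use the file’s standard key = "value" syntax:

mainMem.ioBlockPages = "2048"
mainMem.iowait = "2"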

With the same VM, the second time around the snapshot took just under 2 minutes and 20 seconds, saving almost 20 minutes per snapshot.

As with anything, please test this yourself; I would assume your mileage will vary depending upon your configuration. When looking to implement this at a large scale of dozens or hundreds of VMs, you would need to leverage PowerCLI to shut down, edit, and power back on each VM.

July 27, 2012 at 3:13 pm Comments (0)

V-Locity 3 with Thin on Thin provisioning

Within vSphere 4.x and later, thin provisioned VMDKs are an option (through the GUI). For those who have shared storage, many storage vendors also offer thinly provisioned LUNs. So it is foreseeable one could thinly provision a 100GB VMDK on a 1TB thinly provisioned LUN, and if the VMDK is only using 1GB then the LUN should be using close to 1GB of data, assuming it has not been blown out by data usage.

The Test

For testing purposes I’m using a 60GB thinly provisioned VMDK with approximately 50% fragmentation (10k files with 90k fragments), on vSphere 4.1 Update 1 and an HP LeftHand P4500 cluster running SAN I/Q 9.0.

Diskeeper has a product called V-Locity 3 which is supposed to offer great defragmentation options for the thin-on-thin world. The question is: does the boat hold water?

Well, after testing a number of LUNs, there is some truth to its functionality and some dead weight. Looking back at the physical world, with Windows Server installed on bare hardware, Diskeeper Server Edition works at both preventing fragmentation (to a point) and defragmenting. When we look at VMs instead, IntelliWrite (http://www.diskeeper.com/blog/post/2009/11/20/Inside-IntelliWrite-technology.aspx) still works as expected. The automatic defrag feature, on the other hand, balloons out the thin volume, and if left to run 24×7 it would fully balloon the volume.

This is where V-Locity is supposed to step in, where Diskeeper Server falls short for virtual servers. Be forewarned: Diskeeper Administrator is required to gain the full functionality of the software (V-Aware and CogniSAN, which prevent over-utilization of the given VM server or SAN). While there are some other features, the one I was looking at in particular was the automatic zeroing of free space.

Automatic Zeroing of Free Space is an interesting feature, basically offering their version of sDelete.exe at a file-by-file level. By design you are supposed to run the automatic defrag feature during a relatively idle period in the datacenter, and the rest of the time the automatic zeroing will zero out the freed space. Only when a storage vMotion to a datastore with a different block size occurs will the space really be reclaimed.

So, as expected, it does work, but I would run a cost/benefit analysis before buying V-Locity for the entire enterprise. Looking at my everyday production environment, I can only see three or four heavy-usage systems actually benefiting significantly from this software, while the rest would run the regular version of Diskeeper Server purely for IntelliWrite.

vSphere 5.0

Now what about all the talk of vSphere 5 adding a new version of Array Integration (VAAI) which takes care of the reclamation without a Storage vMotion? Well, it’s there, but as it currently stands there are two big issues with it:
1) VMware has published KBs instructing everyone to disable it due to performance issues (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007427).

2) Some SAN appliances don’t support the new feature, which is my current problem.

As for number one, test and see how bad the performance hit is in a lab environment before trying it in production. Technically I don’t see why the feature is recommended to be simply disabled; you could keep it disabled except during some scheduled windows, enabling it for a few hours at a time and getting at least a partial benefit from the feature.
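
On an ESXi 5.0 host the behavior is driven by the VMFS3.EnableBlockDelete advanced setting referenced in the KB above, so a scheduled toggle might look roughly like this (a sketch; confirm the setting name and defaults against the KB for your exact build):

esxcli system settings advanced list -o /VMFS3/EnableBlockDelete   # check the current value
esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1   # enable for a maintenance window
esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 0   # disable again afterwards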

For number two, start leaning heavily on your SAN providers to deliver the functionality. As of this writing SAN I/Q 9.5 doesn’t support disk space reclamation (sometimes referred to as UNMAP); hopefully this feature will be out in the next release. In a production environment, scripting out the creation of new LUNs, performing a Storage vMotion, and then finally destroying the existing LUN puts a heavy I/O load on the SAN and is an ugly solution for automation.

Conclusion

For me, the biggest deal-breaker with the software isn’t the software itself, as it works as designed. Unless the entire stack supports reclamation as a whole, the product is *nice* but not worth it in large deployments and time consuming in small deployments.

December 27, 2011 at 11:44 pm Comments (0)

vCenter Appliance and multiple network cards

The VMware vCenter Appliance is a quick and easy method of deploying vCenter. But as with any 1.0 release, there are many features people are looking for before using it in a production environment; beyond third-party plug-in support, Update Manager, linked mode, etc., there is no ability to support multiple network cards. When checking the vCenter Appliance administration web interface, there is no way to address more than one network card.

Luckily this can be corrected with use of the command line.


  1. Edit your vCenter Appliance and add the additional network card (VMXNet3)
  2. Via the vCenter Appliance console, log in locally as root
  3. cd /etc/sysconfig/network
  4. cp ifcfg-eth0 ifcfg-eth1
  5. vi ifcfg-eth1
  6. Update the device to be eth1 and correct the IP addressing information (see the sketch after this list)
  7. Restart
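
As a rough sketch, the edited ifcfg-eth1 ends up looking something like the following. The addresses are purely illustrative and the exact fields depend on what was in the copied ifcfg-eth0; keep what is already there and only change the device name and addressing.

DEVICE=eth1
BOOTPROTO='static'
STARTMODE='auto'
IPADDR='192.168.2.10'
NETMASK='255.255.255.0'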

Now the vCenter appliance will be available on two addresses in potentially different subnets.

December 16, 2011 at 3:10 pm Comment (1)