LiquidObject

Runaway process checking

Recently I ran into an issue with PHP process exhaustion on a Windows Server running IIS. In this scenario the PHP-CGI.exe process would continue to spawn additional instances as load on the server increased, but over time the application pool would struggle and begin to slow to a crawl. In the past I have seen other applications, during various iterations of development, run into the same kind of issue: more than “x” instances of an application means it is unhealthy, and fewer than “y” instances means it is not running properly.


$myprocess = "php-cgi"
$myserver = "WebServer"
$mydomain = "liquidobject.com"
$mail_server = "mail.liquidobject.com"
$mail_recipient = "my_support_team@liquidobject.com"
$toomany = 40       # warning threshold - email the support team
$waytoomany = 80    # critical threshold - bounce IIS

$mail_sender = "$myserver@$mydomain"
$mailreport_subject = "Script: $myserver $myprocess count"
$body = " "

function SendEmailReport
{
    $body = [string]::join([environment]::NewLine, ($body)) 
    $msg = New-Object System.Net.Mail.MailMessage $mail_sender, $mail_recipient, $mailreport_subject, $body
    $client = New-Object System.Net.Mail.SmtpClient $mail_server
    $client.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials
    $client.Send($msg)
}


# Count running instances; @() and -ErrorAction keep this working when zero or one are found
$mycount = @(Get-Process -Name $myprocess -ErrorAction SilentlyContinue).Count

if ($mycount -gt $toomany)
{
    $body = "We have $mycount $myprocess processes, something is unusual."
    if ($mycount -gt $waytoomany)
    {
        # No one has responded in time, so bounce IIS to recover the site
        IISRESET /STOP
        IISRESET /START
        $body = "We have $mycount $myprocess processes, IIS has been reset."
    }
    SendEmailReport
}

In this case we send an email notification to the fictional “support team” when more than 40 instances of the php-cgi process are running, and if no one responds by the time 80 instances are reached, the site is automatically bounced to ensure its availability.
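
The same pattern covers the opposite failure mode mentioned at the start, where fewer than “y” instances means the application is not running at all. A hypothetical lower threshold could be bolted onto the script like this:

$toofew = 1    # hypothetical lower bound; below this the application is considered down
if ($mycount -lt $toofew)
{
    $body = "We have only $mycount $myprocess processes, the application may not be running."
    SendEmailReport
}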

The simple method for running the check is to have Task Scheduler call the script every 5 minutes: pretty simple, yet effective.
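
For example, assuming the script is saved as C:\Scripts\Check-PhpCgi.ps1 (a hypothetical path and task name), a task that runs it every 5 minutes under the SYSTEM account can be created from an elevated prompt along these lines:

schtasks /Create /TN "Check php-cgi count" /SC MINUTE /MO 5 /RU SYSTEM /TR "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Check-PhpCgi.ps1"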

May 11, 2013 at 7:30 am Comments (0)

FreeNAS 8.3 and VMware ESXi’s VMXNet3 adapter

Out of the box, FreeNAS 8.3 only supports the e1000 network adapter, and while it does “work”, it really lacks performance in a virtual environment. To get around this limitation we need to install VMware Tools to support more modern network adapters. This question is asked time and time again in the FreeNAS forums and elsewhere, yet I never see a straightforward solution for adding the VMXNet3 adapter. So here we go.

We’ll assume you already have your VM deployed with one e1000 and one vmxnet3 adapter, and that we are just loading in the drivers.


Add Perl

Pull up the shell or connect via SSH to your FreeNAS VM

mount -urw /        # remount the root filesystem read-write
cd /tmp
pkg_add -r perl -K  # fetch the perl package and keep a copy of the tarball locally
tar -xjf perl.tbz
cp lib/perl5/5.12.14/mach/CORE/libperl.so /lib

(the build number will change as time goes on)

Add compat6x

pkg_add -r compat6x-amd64

Install VMware Tools

It is assumed you are installing with the default options.

Start the installation as normal via the VM's "Install/Upgrade VMware Tools" menu option, then:

mkdir /mnt/cdrom
mount -t cd9660 "/dev/iso9660/VMware Tools" /mnt/cdrom    # mount the VMware Tools ISO
cd /tmp
tar zxpf /mnt/cdrom/vmware-freebsd-tools.tar.gz           # extract the FreeBSD tools package
umount /mnt/cdrom
cd vmware-tools-distrib
./vmware-install.pl                                       # run the installer
/usr/local/bin/vmware-config-tools.pl                     # then the configuration script

Ignore the failure notice for the memory manager. At this point VMware Tools is installed but still needs some tweaking.

VMware tools tweaking

vi /usr/local/etc/rc.d/vmware-tools.sh
Look for: if [ "$vmdb_answer_VMHGFS_CONFED" = 'yes' ]; then    and change yes to xyes
Look for: if [ "$vmdb_answer_VMMEMCTL_CONFED" = 'yes' ]; then    and change yes to xyes
Look for: if [ "$?" -eq 0 -a "$vmdb_answer_VMXNET_CONFED" = 'yes' ]; then    and change yes to xyes
save and close vi (Esc, then :wq and Enter)
rm /etc/vmware-tools/not_configured    # clear the "not configured" flag so the tools services will start
reboot
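
If you would rather script those three edits than make them by hand in vi, a single sed substitution along these lines should cover them (an untested sketch; keep a backup of vmware-tools.sh first):

sed -i '' -E "s/(vmdb_answer_(VMHGFS|VMMEMCTL|VMXNET)_CONFED\" = )'yes'/\1'xyes'/" /usr/local/etc/rc.d/vmware-tools.sh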

Now, within the FreeNAS WUI (web user interface), add an additional network adapter; you’ll see the vmxnet3 adapter listed as “vmx3f0”.


I’m seeing the following differences when copying sequential data (a 4GB ISO) to and from a test system over SSD and gigabit infrastructure.

e1000 Adapter

  • Read: 50 MB/sec to 59 MB/sec for the first 2GB, then 73 MB/sec
  • Write: 33.0 MB/sec to 35 MB/sec

VMXNet3 Adapter

  • Read: 93 MB/sec to 95 MB/sec
  • Write: 29.5 MB/sec to 42.1 MB/sec

My VM configuration

  • vCPU: 3
  • Ram: 6GB
  • Drives: 4GB vmdk, 3×1.5TB virtual RDM
  • Raidz
  • NIC: e1000(management),VMXNET3(data)
  • VM Hardware Version: VMX-09

My host config

  • CPU: Dual Xeon e5320’s
  • Ram: 24GB ECC DDR2
  • Controllers: IBM M1015 (IT firmware), LSI 8308ELP
  • Drives: 2x500GB(hardware mirror), 3×1.5TB(7200.11)(FreeNAS virtual RDM’s)
  • NIC: Onboard Intel 1000pro
  • OS: ESXi 5.1 Update 1

Sorry, there is no VT-d on this host to pass through the M1015, which may be adding a small amount of overhead to the virtual RDMs.

March 13, 2013 at 8:24 pm Comment (1)

SCOM 2012 database grooming

Approximately three months back we migrated to SCOM 2012 and have been slowly rebuilding our configuration. In defining the configuration we forgot one key part: database grooming customization. By default some data is kept for a couple of days, but a lot of data is kept for either 180 or 400 days. While this may be OK in a lab environment where you are only monitoring a few systems, in production it will cause some unexpected database growth issues. Below you can see the defaults configured.

Dataset name                   Aggregation name     Max Age     Current Size, Kb
------------------------------ -------------------- ------- --------------------
Alert data set                 Raw data                 400         8,440 (  0%)
Client Monitoring data set     Raw data                  30             0 (  0%)
Client Monitoring data set     Daily aggregations       400            16 (  0%)
Configuration dataset          Raw data                 400       133,616 (  0%)
DPM event dataset              Raw data                 400             0 (  0%)
Event data set                 Raw data                 100       594,592 (  2%)
Microsoft.Exchange.2010.Dataset.AlertImpact Raw data                   7             0 (  0%)
Microsoft.Exchange.2010.Dataset.AlertImpact Hourly aggregations        3             0 (  0%)
Microsoft.Exchange.2010.Dataset.AlertImpact Daily aggregations       182             0 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.Availability Raw data                 400            16 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.Availability Daily aggregations       400             0 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Raw data                   7             0 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Daily aggregations       400             0 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Raw data                   3        17,424 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Hourly aggregations        7       225,104 (  1%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Daily aggregations       182       104,592 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Raw data                   7         1,616 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Hourly aggregations       31         6,480 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Daily aggregations       182           688 (  0%)
Performance data set           Raw data                  10     4,984,944 ( 13%)
Performance data set           Hourly aggregations      400    26,558,360 ( 69%)
Performance data set           Daily aggregations       400     3,047,320 (  8%)
State data set                 Raw data                 180        37,280 (  0%)
State data set                 Hourly aggregations      400     2,481,936 (  6%)
State data set                 Daily aggregations       400       117,280 (  0%)

To prevent the database from growing to hundreds of GB we need to adjust the retention policies. To accomplish this, download the dwdatarp.exe utility from Microsoft at: http://blogs.technet.com/b/momteam/archive/2008/05/14/data-warehouse-data-retention-policy-dwdatarp-exe.aspx

With this downloaded, open an administrative command prompt on the SCOM server and we can begin.

First run: dwdatarp.exe -s localhost -d "OperationsManagerDW"
This will show your current configuration; next we need to tweak some of the retention periods. The example below is a mix of retention periods for an environment with Exchange 2010 and DPM 2012 installed.

dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Alert data set" -a "Raw data" -m "30"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Event data set" -a "Raw data" -m "30"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Client Monitoring data set" -a "Daily aggregations" -m "60"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Configuration dataset" -a "Raw data" -m "30"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "DPM event dataset" -a "Raw data" -m "30"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Microsoft.Exchange.2010.Reports.Dataset.Availability" -a "Raw data" -m "30"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Microsoft.Exchange.2010.Reports.Dataset.Availability" -a "Daily aggregations" -m "90"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Microsoft.Exchange.2010.Reports.Dataset.TenantMapping" -a "Daily aggregations" -m "90"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data" -a "Daily aggregations" -m "90"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data" -a "Daily aggregations" -m "90"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Performance data set" -a "Raw data" -m "7"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Performance data set" -a "Hourly aggregations" -m "14"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "Performance data set" -a "Daily aggregations" -m "90"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "State data set" -a "Raw data" -m "7"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "State data set" -a "Hourly aggregations" -m "14"
dwdatarp.exe -s localhost -d "OperationsManagerDW" -ds "State data set" -a "Daily aggregations" -m "90"


After making the changes and waiting for the automated grooming to complete, I ended up dropping the database size from 42GB (and growing) to 21GB.


Dataset name                   Aggregation name     Max Age     Current Size, Kb
------------------------------ -------------------- ------- --------------------
Alert data set                 Raw data                  30         4,656 (  0%)
Client Monitoring data set     Raw data                  30             0 (  0%)
Client Monitoring data set     Daily aggregations        60            16 (  0%)
Configuration dataset          Raw data                  30       133,552 (  1%)
DPM event dataset              Raw data                  30             0 (  0%)
Event data set                 Raw data                  30       352,040 (  2%)
Microsoft.Exchange.2010.Dataset.AlertImpact Raw data                   7             0 (  0%)
Microsoft.Exchange.2010.Dataset.AlertImpact Hourly aggregations        3             0 (  0%)
Microsoft.Exchange.2010.Dataset.AlertImpact Daily aggregations       182             0 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.Availability Raw data                  30            16 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.Availability Daily aggregations        90             0 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Raw data                   7             0 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Daily aggregations        90             0 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Raw data                   3        17,680 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Hourly aggregations        7       226,384 (  1%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Daily aggregations        90       104,144 (  1%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Raw data                   7         1,616 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Hourly aggregations       31         6,416 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Daily aggregations        90           688 (  0%)
Performance data set           Raw data                  10     5,047,512 ( 30%)
Performance data set           Hourly aggregations       14     6,600,016 ( 39%)
Performance data set           Daily aggregations        90     3,047,104 ( 18%)
State data set                 Raw data                   7        23,840 (  0%)
State data set                 Hourly aggregations       14     1,064,864 (  6%)
State data set                 Daily aggregations        90       117,088 (  1%)


Also, if you want to speed up the cleanup, you can run the following command from within SQL to reduce the interval between grooming runs.

update StandardDatasetAggregation set GroomingIntervalMinutes = '11' where GroomingIntervalMinutes = '240'

After the cleanup has finished, run the following to change the configuration back to what it was:

update StandardDatasetAggregation set GroomingIntervalMinutes = '240' where GroomingIntervalMinutes = '11'
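
To confirm the new retention values took effect (and that the grooming interval was restored afterwards), the same settings can be read back with a quick query. This assumes the standard OperationsManagerDW schema, where StandardDatasetAggregation joins to the Dataset table on DatasetId:

-- Read back retention and grooming settings per dataset/aggregation
-- (AggregationTypeId is commonly 0 = raw, 20 = hourly, 30 = daily)
SELECT ds.DatasetDefaultName,
       sda.AggregationTypeId,
       sda.MaxDataAgeDays,
       sda.GroomingIntervalMinutes
FROM StandardDatasetAggregation sda
JOIN Dataset ds ON ds.DatasetId = sda.DatasetId
ORDER BY ds.DatasetDefaultName, sda.AggregationTypeId
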
March 11, 2013 at 11:44 am Comments (0)

Purging the old login ScriptPath

With the advances in directory topologies over the years, batch and VBScript login scripts are slowly being phased out in favor of Group Policy-based solutions, which offer greater flexibility. Recently I had the need to purge the ScriptPath field from all users within an organization.

# Requires the ActiveDirectory module (RSAT); autoloaded on PowerShell 3.0+, imported here for older hosts
Import-Module ActiveDirectory

# Pull every user under the search base along with their current ScriptPath
$myusers = Get-ADUser -Filter * -SearchBase "DC=liquidobject,DC=com" -Properties ScriptPath | Select-Object SamAccountName,ScriptPath

Write-Host $myusers.Count " users loaded."

foreach ($user in $myusers)
{
    if ($user.ScriptPath.Length -gt 0)
    {
        Write-Host "Cleaning: " $user.SamAccountName #" - " $user.ScriptPath
        Set-ADUser -Identity $user.SamAccountName -ScriptPath $null
        Start-Sleep -Milliseconds 100    # brief pause to avoid hammering the domain controller
    }
    else
    {
        # Write-Host "Already Clean: " $user.SamAccountName
    }
}
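
Since this touches every user object under the search base, it may be worth previewing the change first. Set-ADUser supports the standard -WhatIf switch, so a dry run reusing $myusers from the script above could look like this:

# Dry run: show which accounts would be touched without modifying AD
foreach ($user in ($myusers | Where-Object { $_.ScriptPath }))
{
    Set-ADUser -Identity $user.SamAccountName -ScriptPath $null -WhatIf
}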

October 17, 2012 at 3:44 pm Comments (0)

Bad Fragmentation

Shortly before moving I ran across the most heavily fragmented system, by percentage, that I have ever seen. To give some background, the system was an IBM x336 running Windows 2000 Server (yes, 2000 Server) and it had been up and in production for 5+ years. As you can see, the 10.6GB partition for the C-drive is reporting 124% fragmentation!

[Screenshot: Defraggler reporting 124% fragmentation on the 10.6GB C: partition]

The screenshot above was taken with an old version of Defraggler (1.21); the defrag itself took about 12 hours.

Why is it that no-one ever does any maintenance on their systems?

November 14, 2011 at 10:06 pm Comments (0)

Relocation – Part 2

Well, that took longer than expected. I’ve finally settled down enough to start writing again. On a side note, sometimes budget VPS solutions don’t cut it when running a MySQL database.

November 14, 2011 at 9:59 pm Comments (0)
