Tuesday, November 5, 2013

Checking for VMware Snapshots

So besides the standard way of simply looking at vCenter to see snapshots, there are a couple of other ways to accomplish this. The first is manual and uses RVTools, which is a GREAT tool for gathering lots of information about your VMware environment and its virtual machines. I run it every Monday just to make sure things look healthy. Veeam's monitoring software, VeeamONE, will also alert you when a snapshot has been active for longer than a specified time, which you can change by editing the alarm. VeeamONE is free, but if you want more features you have to go to the full version, which is not free, of course.

I recommend both of these free tools to anyone who wants a health checkup or wants to create some quick documentation of their VMware Environment. These tools helped me immensely when I inherited mine, so that I could hit the ground with good information and a list of things to fix first.

I should not leave out vCheck, a VERY full-featured script that can give you a LOT of good info. As a matter of fact, I created the script at the end of this entry from one of the subscripts of the vCheck project: the one that checks for snapshots.

I created this script to run at 6AM and let me know if any snapshots are running before the start of the business day. These snapshots can exist for a lot of reasons, but any of them bears a closer look. I could have a Veeam backup that is stuck, as happened to me last week. I, or another admin, might have left a snapshot running (which merits a flogging!).

Also, you'll notice that the first thing it does is retrieve your credentials, which I created beforehand using the technique outlined in this post.

Without further ado, here's the script:

#Gets the credentials to facilitate connection to the Vcenter Server
$password = Get-Content c:\PSCred\mycred.txt | ConvertTo-SecureString
$credential = New-Object System.Management.Automation.PsCredential "",$password

#Gets some other miscellaneous variables for use in the email alert
$smtpServer = ""
$MailFrom = ""
$MailTo = ""
$VISRV = "vCenterServer"

#Add the snapin for PowerCLI (The VMware Powershell module)
add-pssnapin Vmware.VimAutomation.Core

#Connect to your vCenter Server, using the credentials we created to authenticate
connect-viserver -server $VISRV -credential $credential

#Function that finds the user that created the snapshot, so you can flog them
function Find-User ($username){
if ($username -ne $null){
$usr = (($username.split("\"))[1])
$root = [ADSI]""
$filter = ("(&(objectCategory=user)(samAccountName=$usr))")
$ds = new-object system.DirectoryServices.DirectorySearcher($root,$filter)
$ds.PageSize = 1000
$ds.FindOne()
}
}

#Function that gets snapshot info
function Get-SnapshotTree{
param($tree, $target)

$found = $null
foreach($elem in $tree){
if($elem.Snapshot.Value -eq $target.Value){
$found = $elem
}
if($found -eq $null -and $elem.ChildSnapshotList -ne $null){
$found = Get-SnapshotTree $elem.ChildSnapshotList $target
}
}
return $found
}

#Function that gets more detailed snapshot info
function Get-SnapshotExtra ($snap){
$guestName = $snap.VM # The name of the guest

$tasknumber = 999 # Windowsize of the Task collector

$taskMgr = Get-View TaskManager

# Create hash table. Each entry is a create snapshot task
$report = @{}

$filter = New-Object VMware.Vim.TaskFilterSpec
$filter.Time = New-Object VMware.Vim.TaskFilterSpecByTime
$filter.Time.beginTime = (($snap.Created).AddSeconds(-5))
$filter.Time.timeType = "startedTime"

$collectionImpl = Get-View ($taskMgr.CreateCollectorForTasks($filter))

$dummy = $collectionImpl.RewindCollector
$collection = $collectionImpl.ReadNextTasks($tasknumber)
while($collection -ne $null){
$collection | where {$_.DescriptionId -eq "VirtualMachine.createSnapshot" -and $_.State -eq "success" -and $_.EntityName -eq $guestName} | %{
$row = New-Object PsObject
$row | Add-Member -MemberType NoteProperty -Name User -Value $_.Reason.UserName
$vm = Get-View $_.Entity
$snapshot = Get-SnapshotTree $vm.Snapshot.RootSnapshotList $_.Result
$key = $_.EntityName + "&" + ($snapshot.CreateTime.ToString())
$report[$key] = $row
}
$collection = $collectionImpl.ReadNextTasks($tasknumber)
}
$collectionImpl.DestroyCollector()

# Get the guest's snapshots and add the user
$snapshotsExtra = $snap | % {
$key = $_.vm.Name + "&" + ($_.Created.ToString())
$_ | Add-Member -MemberType NoteProperty -Name Creator -Value $report[$key].User
$_
}
$snapshotsExtra
}

#Function to send mail. I normally just use a one-liner, but I'm reusing code, and this was here.
Function SnapMail ($Mailto, $snapshot){
$msg = new-object Net.Mail.MailMessage
$smtp = new-object Net.Mail.SmtpClient($smtpServer)
$msg.From = $MailFrom
$msg.To.Add($Mailto)

$msg.Subject = "Snapshot Reminder"

$MailText = @"
There is a snapshot active on $($snapshot.VM) which was taken on $($snapshot.Created).

Name: $($snapshot.Name)

Description: $($snapshot.Description)
"@

$msg.Body = $MailText
$smtp.Send($msg)
}

#Cycles through any snapshots found, and sends an email for each one
foreach ($snap in (Get-VM | Get-Snapshot)){
$SnapshotInfo = Get-SnapshotExtra $snap
SnapMail $mailto $SnapshotInfo
}

#Disconnect from the vCenter server
Disconnect-VIServer -Confirm:$false
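To get it running at 6AM, I just register it as a scheduled task. Here's a quick sketch; the task name and script path are examples, not from my environment:

```powershell
# Register the script as a daily 6AM task; /RP prompts for the password.
# Task name and script path below are examples only.
schtasks /Create /TN "CheckVMwareSnapshots" /SC DAILY /ST 06:00 `
    /TR "powershell.exe -NoProfile -File C:\Scripts\Check-Snapshots.ps1" `
    /RU DomainName\ScheduledTaskUser /RP
```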

Thursday, October 31, 2013

Storing Credentials for Powershell Scripts

The next script I want to show you has a prerequisite of storing your credentials in a file for later use. I figured I'd better cover that first, so here goes.

These credentials are user specific (duh), but they are also computer specific. Let's say I want to run a scheduled task that uses the 'ScheduledTaskUser' user, and that this task will run from 'TaskServer'.

In order to make this work, this process must be run on 'TaskServer'!

1. Create the securepassword string from the computer that's going to use it:
read-host -assecurestring | convertfrom-securestring | `
out-file c:\temp\ScheduledTask_Credentials.txt

2. When you press enter to execute the command above, you will be on a blank line. Enter the password and press enter.

3. The password is now saved on the computer as a secure string (not in plaintext) in the file that you specified.

4. In your script file, build the credential:

$password = get-content `
c:\temp\ScheduledTask_Credentials.txt | convertto-securestring

$credentials = new-object `
-typename System.Management.Automation.PSCredential `
-argumentlist "DomainName\ScheduledTaskUser",$password

5. Use the credential. You can pass $credentials anywhere you can use the -credential parameter, like so:

get-wmiobject win32_service `
-computer AnotherServer `
-credential $credentials

You'll notice above that I used the backtick character ` to allow me to continue the script on the next line. I'm trying to do this more and more for script readability......
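One limitation worth noting: the saved string only decrypts for the same user on the same computer, because ConvertFrom-SecureString uses Windows DPAPI by default. If you need the file to be portable across machines, both cmdlets accept a -Key parameter for AES encryption instead. A hedged sketch, with a throwaway example key you'd obviously never hard-code for real:

```powershell
# A 32-byte key -> AES-256. Anyone with the key can decrypt the file,
# so protect it; this hard-coded key is for illustration only.
$key = (1..32 | ForEach-Object { [byte]$_ })

# Store the password encrypted with the key instead of DPAPI
Read-Host -AsSecureString |
    ConvertFrom-SecureString -Key $key |
    Out-File C:\temp\Portable_Credentials.txt

# Rebuild it on ANY machine that knows the key
$password = Get-Content C:\temp\Portable_Credentials.txt |
    ConvertTo-SecureString -Key $key
$credential = New-Object System.Management.Automation.PSCredential `
    -ArgumentList "DomainName\ScheduledTaskUser", $password
```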

Monday, October 28, 2013

My 100th Post! -- Troubleshooting Change Block Tracking in Veeam

I never thought I'd make it to 100 posts, but here I am! My initial goal was a post per day, but I soon realized that was unrealistic. I have a family, a job, and a life, after all. So I'm proud that I've created 100 nuggets of information that can help people. In the end, that's what IT is here to do: help people.

Change Block Tracking (CBT, and I don't mean Nuggets) is a VMware feature that Veeam uses extensively. It's what lets Veeam decide which blocks have changed since the last backup, so it can skip everything that HASN'T changed. Using it speeds up your backups a lot. The first time you back up a VM, Veeam has to go through every block of data, but thereafter it uses CBT to be selective.

Once in a while, one of my Veeam backups will report a warning, and I'll see error messages for a specific VM like these:

10/22/2013 11:27:26 PM :: Disk [VMFS-VOL3] VMNumber1/VMNumber1_1.vmdk size changed. Changed block tracking is disabled.

This was caused by expanding a drive on a Windows Server 2008 R2 server. You can do this while the VM is running, which is really handy, but you should reset the CBT on the VM before backups run. I'll outline that process here in a bit.

Another error you might see is this:
10/21/2013 11:16:08 PM :: Cannot use CBT: Soap fault. Error caused by file /vmfs/volumes/5122935d-291050d2-ee9c-ac162d75bf50/VMNumber1/VMNumber1.vmdkDetail: '', endpoint: ''

This was caused by using svMotion to migrate a VM to another datastore, but it can be caused by other things, like using vConverter on a VM. I've seen CBT break for no reason at all, but that's been few and far between.

To fix this issue, perform the following steps to reset CBT. It will require two shutdowns, so get a maintenance window if you need one.

Short version/process outline:
1. Shut down the VM
2. Disable CBT
3. Delete the -CTK files from the filesystem
4. Power on the VM
5. Shut Down the VM
6. Enable CBT
7. Power on the VM

The step-by-step version:
1. Shut down the VM
2. In vCenter, edit the settings of the VM
3. Click on the "Options" tab
4. Click on "General"
5. Click "Configuration Parameters"
6. Click on the "Name" column header twice to sort alphabetically
7. set ctkEnabled to FALSE
8. Scroll down and find each virtual hard drive listing. They look like this: scsix:x.ctkEnabled and set these to FALSE as well. YOU MUST DO THIS FOR ALL HARD DISKS. JUST LOOK FOR THE "CTK" IN THE NAME
9. Click OK as many times as it takes to get back out.
10. Ensure the VM is selected and that you are viewing its "Summary" tab
11. On the right-hand side is a box called "Resource", and within it there is storage listed.
12. Right-click on the listed storage (for example, VMFS-VOL17) and select "Browse Datastore"
13. Look for a folder with the same name as the VM
14. Go into that folder
15. For each virtual disk and snapshot disk there is a matching -ctk.vmdk file. Delete ALL of the -ctk.vmdk files (make sure you are choosing wisely!)

16. Now, boot the VM, and wait until VMware Tools has loaded. The easiest way to accomplish this is to open the VM's console, then use the menu (VM->Power->Shut Down Guest) to shut the VM down. If that option is greyed out, then VMware Tools is not running. Just try again in a few seconds.
17. Shut it down
18. Now, we will turn CBT back on
19. In vCenter, edit the settings of the VM
20. Click on the "Options" tab
21. Click on "General"
22. Click "Configuration Parameters"
23. Click on the "Name" column header twice to sort alphabetically
24. set ctkEnabled to TRUE
25. Scroll down and find each virtual hard drive listing. They look like this: scsix:x.ctkEnabled and set these to TRUE as well. YOU MUST DO THIS FOR ALL HARD DISKS. JUST LOOK FOR THE "CTK" IN THE NAME
26. Click OK as many times as it takes to get back out.
27. Power the VM on, and you're done.
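If you have to do this for a lot of VMs, the ctkEnabled flipping (steps 2 and 6 of the outline) can be scripted with PowerCLI's advanced-setting cmdlets. This is a rough sketch, not the exact procedure above: the VM name is an example, it assumes you're already connected via Connect-VIServer, and you still need the full power-off / power-on cycle and -ctk.vmdk cleanup described in the steps:

```powershell
# Assumes PowerCLI is loaded, Connect-VIServer has run,
# and the VM is already powered off. VM name is an example.
$vm = Get-VM -Name "VMNumber1"

# Turn off the top-level CBT flag
New-AdvancedSetting -Entity $vm -Name "ctkEnabled" -Value $false `
    -Confirm:$false -Force | Out-Null

# Turn off the per-disk flags (scsi0:0.ctkEnabled, scsi0:1.ctkEnabled, ...)
Get-AdvancedSetting -Entity $vm -Name "scsi*.ctkEnabled" |
    Set-AdvancedSetting -Value $false -Confirm:$false | Out-Null

# ...power on, shut down again, then repeat both calls with -Value $true
```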

Friday, October 25, 2013

VMware vCenter with a SQL Express back-end can get full!

If you have a smaller VMware infrastructure, you are probably running vCenter with its database on a SQL Express instance. This works fine, but eventually that database will fill up if you don't adjust the retention setting in vCenter, because SQL Express is limited to a maximum database size of 4GB.

This happened recently to a friend of mine, but in my haste to create a monitor in PRTG to monitor the size I neglected to get a screenshot of what that moment looks like. For that, dear reader, I apologize.

What I CAN tell you is how to find out how big yours is, and how to keep it from growing too large. The size of the database can be found by looking in C:\Program Files\Microsoft SQL Server\MSSQL10_50.VIM_SQLEXP\MSSQL\DATA\VIM_VCDB.mdf. The path may differ based on whether you have a 32 or 64-bit SQL installation, and what you named the file, but that's where mine is. That MDF file is your database (the LDF file with the same base name is your Transaction Log file, just FYI). So, you don't want this file to get near 4GB. I have a sensor in PRTG (my monitoring software) that keeps track of the file size, so I can look at trends or whatnot. It would also be fairly easy to throw something together using powershell to alert me if the file grows over a certain threshold.
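The Powershell alert mentioned above really is just a few lines. A quick sketch; the path, threshold, and mail settings are placeholders you'd fill in for your environment:

```powershell
# Path to the vCenter database file -- adjust for your instance name
$mdf = "C:\Program Files\Microsoft SQL Server\MSSQL10_50.VIM_SQLEXP\MSSQL\DATA\VIM_VCDB.mdf"

# Current size in GB, rounded to two decimals
$sizeGB = [math]::Round((Get-Item $mdf).Length / 1GB, 2)

# Warn well before the 4GB SQL Express cap
if ($sizeGB -gt 3.5) {
    Send-MailMessage -To "" -From "" -SmtpServer "" `
        -Subject "vCenter DB is $sizeGB GB - approaching the 4GB limit!"
}
```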

What I would also do, however, is go into vCenter, click Home-->vCenter Server Settings and adjust the Database Retention Policy to something smaller. I believe the boxes are unchecked by default, so there is no retention policy, meaning that it saves everything forever. See this picture:

Your needs might be different - I have VeeamONE monitoring my stuff, so I elected for 30 days.

Wednesday, October 23, 2013

SQL Data Drive Almost Full? Why is MSDB GIGANTIC!!!!????

I came in a few weeks ago to this potential catastrophe. One of my SQL servers had only a few gigs of space left on its data drive, so I opened a timeline graph to see if we were just running out of room because of normal data growth, or whether there was an issue. There was an issue. The size of my MSDB file was 15.5 GB! I hunted around on the net for answers to how this could possibly happen, and in the end I fixed the issue without the server running out of space. What was happening was that there was a reindexing maintenance plan that stalled out and was just stuck writing to MSDB.

I also feel the need to say that I'm a noob accidental DBA. What that means is that I know the most about SQL in my workplace, so I'm automatically in charge of SQL. I've learned a lot, but I totally Google-Fu'd my way through this problem.

Here's what I ended up doing to fix the issue:

1. I found that the culprit was backed up messages in the sysxmitqueue by running this script:

SELECT object_name(i.object_id) AS objectName,
    i.[name] AS indexName,
    SUM(a.total_pages) AS totalPages,
    SUM(a.used_pages) AS usedPages,
    SUM(a.data_pages) AS dataPages,
    (SUM(a.total_pages) * 8) / 1024 AS totalSpaceMB,
    (SUM(a.used_pages) * 8) / 1024 AS usedSpaceMB,
    (SUM(a.data_pages) * 8) / 1024 AS dataSpaceMB
FROM sys.indexes i
JOIN sys.partitions p
    ON i.object_id = p.object_id
    AND i.index_id = p.index_id
JOIN sys.allocation_units a
    ON p.partition_id = a.container_id
GROUP BY i.object_id, i.index_id, i.[name]
ORDER BY SUM(a.total_pages) DESC


2. I stopped the maintenance job that was stuck.

3. I then ran
SELECT * from sys.transmission_queue

This gave me the conversation handle that was stuck (9A06F198-5008-E011-9526-0050569F69D3).

4. I then ran
End Conversation '9A06F198-5008-E011-9526-0050569F69D3' with cleanup

This stopped the conversation and purged the table in MSDB. It took around 20 minutes to run.

5. I kicked everyone off the system. A quick way to see connections to your SQL server is to run sp_who2. This lists all of the connections.

6. I stopped the SQL Server Agent Service

7. I ran the following SQL command:
Alter Database msdb 
Set New_Broker With Rollback Immediate

8. I right-clicked on the MSDB database, selected tasks, then shrink -> database

9. This reclaimed 15 of the 15.5 GB. Crisis averted!
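To keep an eye on msdb going forward, something like the following would work. This is a hedged sketch, not part of the original fix: the instance name and 1 GB threshold are examples, and it assumes Invoke-Sqlcmd (from the SQL Server Powershell tools) is available:

```powershell
# Ask SQL Server for the size of msdb's data files, in MB
# (sys.master_files reports size in 8KB pages)
$q = @"
SELECT SUM(size) * 8 / 1024 AS SizeMB
FROM sys.master_files
WHERE database_id = DB_ID('msdb') AND type_desc = 'ROWS'
"@
$size = (Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query $q).SizeMB

# Alert when msdb data files exceed 1 GB (example threshold)
if ($size -gt 1024) {
    Send-MailMessage -To "" -From "" -SmtpServer "" `
        -Subject "msdb is $size MB - check for stuck jobs!"
}
```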

Tuesday, October 22, 2013

What I've been up to for the past couple of months......

So if I have any regular readers, they may have noticed that I went through a two-month spell where I didn't post anything. I've been learning a ton of stuff, mainly revolving around security, and just went all out on that. I could have written lots of "What I learned today" posts, but I just haven't felt like writing, honestly.

It started when I went to GrrCon, a Security Conference in Grand Rapids, Michigan (USA) that really got my juices flowing. I came out of it wanting to learn so many things, and KNOWING that I needed to get more used to using Linux.

So, first order of business was to get Kali Linux, which used to be called BackTrack. I got an "old" laptop from work and loaded it up. I also got a promiscuous wifi card for sniffing wireless traffic. I am running a WPA2-PSK network at home, so I went through some tutorials on how to crack the wifi passwords, made a dummy password dictionary that contained my real password, and was able to crack my password.

Password cracking is something I have read a lot about lately, but haven't gotten around to; it's one thing on a very long list. I've read some really intriguing articles recently by Dan Goodin over at Ars Technica. Here is the latest, but if you're interested in seeing how passwords are becoming more and more useless every day, you should look into older password-related articles on the site. There's some gold there!

The day before GrrCon, I attended a class on using MetaSploit to gain access to a vulnerable system. Basically, they gave us a Kali Linux VM and a Windows 2000 VM, and taught us how to use MetaSploit to root the Windows box. It was STUPID easy, and it gave me a whole new perspective on why those updates need to get out ASAP on Patch Tuesday every month. I was able to use MS08-067 to create a new administrator-level user on the Windows box in about 4 commands. Not exaggerating. That IE 0-day that was making headlines over the past week? There was a Metasploit plugin available 7 days before the fix. Metasploit does some really amazing things once you have root access to a machine. In a few keystrokes you can start logging keystrokes. You can dump the SAM hash to a file and use another program to start cracking the hashes to get actual passwords. And more. Here! Offensive Security makes a free VM called MetaSploitable that you can practice on!

There really is a LOT of stuff out there for the aspiring "Hacker": is like YouTube, but focuses on security and has a lot of "How To" videos. is a great security news site that I've been reading daily.
Hack Forums is a really neat forum where you can get help, or just lurk to see what's possible.

I'm really excited about this stuff!

Monday, October 21, 2013

Documenting Scheduled Tasks for all of my Servers

Yeah, I'm getting the itch to write more regularly. Yesterday's post was my only one in about two months, and it's time to get some stuff out there!

Today, I'm going to talk about a report that I run every week that collects every scheduled task running on my servers, puts it all into an excel file, and emails it to me.

Now, because it uses Excel, this is a task that needs to be run manually. Also, before you run it you should have a C:\Temp folder, and a list of servers in C:\lists\TaskSched-servers.txt -- or you can change those lines, which are below the functions.

#First, here are the functions that deal with Excel:
Function Release-Ref ($ref) {
    ([System.Runtime.InteropServices.Marshal]::ReleaseComObject(
    [System.__ComObject]$ref) -gt 0)
    [System.GC]::Collect()
    [System.GC]::WaitForPendingFinalizers()
}

Function ConvertCSV-ToExcel {
<#
  .SYNOPSIS
    Converts one or more CSV files into an excel file.
  .DESCRIPTION
    Converts one or more CSV files into an excel file. Each CSV file is imported into its own worksheet with the name of the
    file being the name of the worksheet.
  .PARAMETER inputfile
    Name of the CSV file being converted
  .PARAMETER output
    Name of the converted excel file
  .EXAMPLE
    Get-ChildItem *.csv | ConvertCSV-ToExcel -output 'report.xlsx'
  .EXAMPLE
    ConvertCSV-ToExcel -inputfile 'file.csv' -output 'report.xlsx'
  .EXAMPLE
    ConvertCSV-ToExcel -inputfile @("test1.csv","test2.csv") -output 'report.xlsx'
  .NOTES
    Author: Boe Prox
    Date Created: 01SEPT210
    Last Modified:
#>
#Requires -version 2.0
[CmdletBinding(
    SupportsShouldProcess = $True,
    ConfirmImpact = 'low',
    DefaultParameterSetName = 'file'
    )]
Param (
    [Parameter(
     ValueFromPipeline=$True,
     Position=0,
     Mandatory=$True,
     HelpMessage="Name of CSV/s to import")]
    [ValidateNotNullOrEmpty()]
    [array]$inputfile,
    [Parameter(
     Position=1,
     Mandatory=$True,
     HelpMessage="Name of excel file output")]
    [ValidateNotNullOrEmpty()]
    [string]$output
    )

Begin {
    #Configure regular expression to match full path of each file
    [regex]$regex = "^\w\:\\"
    #Find the number of CSVs being imported
    $count = ($inputfile.count -1)
    #Create Excel Com Object
    $excel = new-object -com excel.application
    #Disable alerts
    $excel.DisplayAlerts = $False

    #Show Excel application
    $excel.Visible = $False

    #Add workbook
    $workbook = $excel.workbooks.Add()

    #Remove other worksheets
    $workbook.worksheets.Item(2).delete()
    #After the first worksheet is removed, the next one takes its place
    $workbook.worksheets.Item(2).delete()

    #Define initial worksheet number
    $i = 1
}
Process {
    ForEach ($input in $inputfile) {
        #If more than one file, create another worksheet for each file
        If ($i -gt 1) {
            $workbook.worksheets.Add() | Out-Null
        }
        #Use the first worksheet in the workbook (also the newest created worksheet is always 1)
        $worksheet = $workbook.worksheets.Item(1)
        #Add name of CSV as worksheet name
        $worksheet.name = "$((GCI $input).basename)"

        #Open the CSV file in Excel, must be converted into complete path if not already done
        If ($regex.ismatch($input)) {
            $tempcsv = $excel.Workbooks.Open($input)
        }
        ElseIf ($regex.ismatch("$($input.fullname)")) {
            $tempcsv = $excel.Workbooks.Open("$($input.fullname)")
        }
        Else {
            $tempcsv = $excel.Workbooks.Open("$($pwd)\$input")
        }
        $tempsheet = $tempcsv.Worksheets.Item(1)
        #Copy contents of the CSV file
        $tempSheet.UsedRange.Copy() | Out-Null
        #Paste contents of CSV into existing workbook
        $worksheet.Paste()

        #Close temp workbook
        $tempcsv.close()

        #Select all used cells
        $range = $worksheet.UsedRange

        #Autofit the columns
        $range.EntireColumn.Autofit() | out-null
        $i++
    }
}
End {
    #Save spreadsheet
    $workbook.saveas($output)

    Write-Host -Fore Green "File saved to $output"

    #Close Excel
    $excel.quit()

    #Release processes for Excel
    $a = Release-Ref($range)
}
}

#Now here's the meat and potatoes:
$servers = Get-Content \\server\c$\lists\TaskSched-servers.txt
Remove-Item "C:\Temp\Scheduled Tasks Documentation.csv" -ErrorAction SilentlyContinue
$TempFile = "C:\Temp\Scheduled Tasks Documentation.csv"
$Attachment = "C:\temp\Scheduled Tasks Documentation.xlsx"
$Tasks = @()

Foreach ($Computername in $Servers){
$schtask = (schtasks.exe /query /s $ComputerName /V /FO CSV | ConvertFrom-Csv)
$schtask = ($schtask | where {$_.Taskname -notlike "*\Microsoft*" -and $_.Taskname -notlike "Taskname"})
$schtask = ($schtask | where {$_."Run as User" -notlike "Network Service"})
if ($schtask){
foreach ($sch in $schtask){
$sch  | Get-Member -MemberType Properties | 
ForEach -Begin {$hash=@{}} -Process {
If ($WithSpace){
($hash.($_.Name)) = $sch.($_.Name)} #End If
Else {
($hash.($($_.Name).replace(" ",""))) = $sch.($_.Name)} #End Else
} -End {$Tasks += (New-Object -TypeName PSObject -Property $hash)} #End Foreach
} #End Foreach
} #End If
} #End Foreach
$Tasks | select Hostname, TaskName, ScheduledTaskState, Status, LastResult, RunasUser, TasktoRun, Comment, NextRunTime, LastRunTime, ScheduleType, StartTime, Months, Days, "Repeat:Every" | export-csv $tempfile

#This Removes the first line of the file, which is just junk
$x = get-content $tempfile
$x[1..$x.count] | set-content $Tempfile

#Use the Functions above to import the CSV and output an Excel file
ConvertCSV-ToExcel -inputfile $Tempfile -output $Attachment

#Email me the file
$To = ""
$From = ""
$Subject = "PS Report - Scheduled Tasks - Documentation Purposes"
$Body = "This is a list of scheduled tasks on all servers, to be used for documentation purposes"
$SMTPServer = ""
Send-Mailmessage -to $To -Subject $subject -From $From -body $body -smtpserver $SMTPServer -attachments $Attachment

#Delete the Temp Files
remove-item $Attachment -force
Remove-Item $Tempfile

Sunday, October 20, 2013

Checking for Running SureBackup Labs

I created this script to notify me (remind me) that a SureBackup Lab was still running.

One key thing to have in place is a consistent naming convention. My Veeam SureBackup lab jobs all follow this one: "Server_LAB".

This runs at 4pm every weekday:

#Add Veeam Powershell Snap-In
Add-PSSnapin VeeamPSSnapin

#Email Variables
$To = ""
$From = ""
$SMTPServer = ""

#Here I get all of the running lab jobs (using my naming convention)
$SBJobs = (Get-VSBJob | where {$_.Name -like "*LAB*"})

#Foreach one, get the job's state, then email me if that state is "Working"
Foreach ($SBJob in $SBJobs){
$SBJobState = ((Get-VSBJob -Name ($SBJob.Name)).GetLastState())
If ($SBJobState -eq "Working"){
$SBJobName = ($SBJob.Name)
$Body = "SureBackup Lab $SBJobName is Running"
$Subject = "SureBackup Lab $SBJobName is Running"
Send-Mailmessage -to $To -Subject $subject -From $From -body $body -smtpserver $SMTPServer
} # End If
} # End Foreach

Friday, August 23, 2013

Daily Comprehensive DCDIAG on my Main Domain Controller

This script had interest from an enterprising gent on a previous blog post that outlined what kind of tasks I've automated and reporting I gather from my environment.

Every morning at 6AM, my primary domain controller runs a comprehensive DCDIAG, and sends it to me. I use select-string to search the results of the DCDIAG output for the string "Failed". It's not foolproof; sometimes I do get failure emails (most of the time it's laggy replication caused by Veeam taking a backup snapshot). But I'd rather have advanced warning that something might be wrong with the most important thing in my environment: Active Directory.

As has become my custom, the comments do the talking from here on out.

#Specify some variables: the output file, what I'll search for, and email settings.
$TempFile = "C:\Temp\DCDiag_Temp.txt"
$SearchText = "Failed"
$SMTP = ""
$To = ""
$From = ""

#Run the DCDIAG with the following switches: Comprehensive, Enterprise (runs against all DCs)
#and verbose, outputting to file
DCDiag /c /e /v > $TempFile

#Read the File
$DCDiag = (Get-Content $TempFile)

#Look for the string and count how many times it appears, then convert the count to a string
#Then take out some newline characters
$FailCount = (($DCDiag | select-string -simple $SearchText | measure-object).count | out-string)
$FailCountString = ($FailCount | out-string)
$FailCountString = ($FailCountString.replace("`r`n",""))

#Format the email, placing a count of the term "Failed" in the subject
$Subject = "PS Report - DCDiag Error Report - $FailCountString Errors"
$Body = "$FailCountString Errors"

#Send me an email
Send-Mailmessage -from $From -to $To -subject $Subject -smtpserver $SMTP -body $Body -Attachments $TempFile

#Delete the temp file
Remove-Item $TempFile

Normally, I only send the email if there's something to report, but I send this one every day for two reasons. One, I like knowing that it's running, and two, it gives me a warm fuzzy feeling seeing that AD is healthy (almost) every morning.

Monday, August 19, 2013

Excel: Convert an Entire Column From Kilobytes to Megabytes

I'm always making reports in excel, usually when outputting to CSV. Until now, I would go and look up how to use Powershell to convert this, but sometimes I just need to throw something together quickly, and this is what I learned to do in Excel to convert a whole column from one thing to another:

I'm running Office 2010, by the way.

Put the number 1024 into a cell that's not in the column you're going to convert.
Now, copy that one cell.
Select the column that you want to convert.
Right-click, select Paste Special, Paste Special AGAIN, then "Divide".

Looking at the menu where you choose Divide, you'll notice the other mathematical operations. Sure enough, you can use this same method to add, subtract, or multiply an entire selection of values by whatever number you have copied.
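For the record, the Powershell route I used to go look up can be sketched in a few lines. The file name and the 'SizeKB' column are made up for illustration:

```powershell
# Read a report, convert the example 'SizeKB' column to MB, write it back out
Import-Csv C:\temp\report.csv |
    Select-Object Name, @{Name = 'SizeMB'; Expression = { [math]::Round([double]$_.SizeKB / 1024, 2) }} |
    Export-Csv C:\temp\report_mb.csv -NoTypeInformation
```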

Wednesday, August 14, 2013

Database Owner Unknown

I've been changing all of my MS SQL databases to the Simple recovery model so I don't have to worry about transaction log backups and truncation. I left only the really important ones, where point-in-time restore matters, on Full. While trying to change one of my 'ReportServer' databases to Simple, I was not able to right-click on it in SQL Server Management Studio (SSMS) and select Properties. I always got the following error:

Cannot show requested dialog. (SqlMgmt)
Property Owner is not available for Database '[DBName]'. This property may not exist for this object, or may not be retrievable due to insufficient access rights.

Now, I was logged in as the 'sa' account, so "insufficient access" isn't the problem. The problem was that the database in question HAD NO OWNER!!!

I verified this by running the following query against the master database:
sp_helpdb ReportServer

That runs a stored procedure (the 'sp' part) called 'helpdb', which displays info about the database you choose (ReportServer). This query returned a table of info, and the owner field said '~~~UNKNOWN~~~'. Not so good.

The fix was simple - use another stored procedure:
sp_changedbowner 'sa'

The query ran successfully, and I was able to then access the properties of the ReportServer database and change the recovery model.

Thursday, July 11, 2013

Gathering my Log Files. Every. Day.

This post relates to item #1 on my "What I'm Monitoring" post.

This script is set to execute as a scheduled task on each one of my Windows servers; I've even made it part of my provisioning checklist.

I'll allow my commenting within the script to do the 'splaining:


#Get yesterday's date
$Date = ((get-date).adddays(-1))

#Create formatting elements for the email
$SystemHeader = "`r`n+++   System Log Errors and Warnings    +++"
$ApplicationHeader = "`r`n+++ Applications Log Errors and Warnings  +++"
$DelimiterLog = "`r`n++++++++++++++++++++++++++++++++++++++++++++"
$Newline = "`r`n"

#Get the hostname
$Hostname = ($env:computername)

#Retrieve and filter the system log - Whenever I get an error that's 'ok' and that I don't care to 
#see anymore, I will add it to this section. This procedure also applies to the next section.
$System = (get-eventlog system -after $Date -EntryType Error,Warning | where {$_.eventid -ne 1111 -and $_.eventid -ne 3 -and $_.eventid -ne 4 -and $_.eventid -ne 8 -and $_.eventid -ne 1109} | select Entrytype, Source, EventID, Message, TimeGenerated | format-list)

#Retrieve and filter the application log
$Application = (get-eventlog application -after $Date -EntryType Error,Warning | where {$_.eventid -ne 1530 -and $_.eventid -ne 1524 -and $_.eventid -ne 1517 -and $_.eventid -ne 12321 -and $_.eventid -ne 1008 -and $_.eventid -ne 2003 -and $_.eventid -ne 100 -and $_.eventid -ne 1023} | select Entrytype, Source, EventID, Message, TimeGenerated | format-list)

#Build the email
$Body = $DelimiterLog
$Body += $SystemHeader
$body += $DelimiterLog
$body += ($System | Out-string)
$body += $Newline
$body += $Newline
$body += $DelimiterLog
$body += $ApplicationHeader
$body += $DelimiterLog
$body += ($Application | Out-string)

#Send me an email of any errors, but ONLY send the email IF there are errors!
If ((($system | Measure-Object).count) -gt 0 -or (($application | Measure-Object).count) -gt 0){
Send-Mailmessage -from "" -to "" -subject "Log Errors and Warnings from $Hostname" -smtpserver SMTPServerName -body $body
} #End If


What you see above is the script that runs on 90% of my servers. However, some of my servers end up needing their own file due to differences in the filtering section (where I'm excluding certain event IDs from getting through). Examples include my Exchange server and my IIS web servers.

Wednesday, July 10, 2013

What I'm Monitoring and Automating

Now that I'm pretty settled in my new job, I thought it might be helpful to step back and take a look at what I've automated and what I'm monitoring, and why. This will also serve as a roadmap for future content, because you all know how I love to post scripts.

  1. GatherLogs: Every server I have runs a task in the morning that sends me any warning or error entries in the system or application logs. Of course, Windows error logs have a lot of errors that just occur and really don't mean anything. Therefore, my script weeds these entries out. Previous Article.
  2. SQLExpBkup: Servers running SQL Express run a job to back up their databases locally prior to the network backup jobs. Previous Article.
  3. Off-site backups: I mirror quite a bit (using robocopy, mainly) to offsite repositories:
    • Some SMB shares
    • Backup Exec B2D files
    • Veeam Backup Files
    • The content of our IT Fileshare (ISOs, installation files, etc)
    • The contents of our departmental Sharepoint Knowledge Base (a Document Library). We don't want to be without our documentation. Previous Article
  4. SureBackup Open Lab Check: This script checks to see if I have any open Veeam SureBackup virtual labs running before I leave for the day. Previous Article
  5. VMware snapshot check: runs first thing in the morning to notify me of any snapshots that exist in my VMware environment. Article Here
  6. Automatic reporting on my Dell Equallogic SAN replication jobs. EDIT: I'm not replicating with Dell's stuff now. For our needs, doing Veeam BackupCopy was good enough.
  7. Weekend Veeam SureBackup job chain. Previous Article.
  8. Bad Logins on my domain controllers. Previous articles: Part one and part two.
  9. Logon Counts on my domain controllers. EDIT: Not doing this anymore. I had nothing actionable from the data and it takes a lot of CPU to comb through the event logs.
  10. Daily comprehensive DCDIAG on my main domain controller. Previous Article.
  11. Script that checks for file names that exceed the NTFS limits on my file servers. Previous Article.
  12. An emailed list of any PST files on my file server that houses my users' home directories. We're still trying to get rid of these things....
  13. SQL Database mid-day backup, copy to our testing SQL server, mount and DBCC check. Previous Article.
  14. A script that sets up Spiceworks tickets for recurring daily, weekly, and monthly tasks. Article Here
  15. Automatically emailed Spiceworks reports. Previous Article.
  16. List of computers in AD that have not logged in for over 90 days. Previous Article.
  17. List of users in AD that are disabled. Article Here
  18. List of users' last login dates and times. Sorted by date and ascending, it shows me accounts that aren't being used. Previous Article.
  19. Scheduled Task Issues - stuck or failed tasks on any server. Article Here
  20. Sensitive Group Audit - who is a member of the Domain Admins group or other powerful groups in your org? Article Here
  21. Website Monitoring - there are probably better ways to do this, but I have a powershell script that checks our websites that runs every 5 minutes or so. EDIT: I have PRTG do this now, but I modified my old Powershell script to ensure that PRTG is running. Monitoring the monitor!
  22. Documentation script that combs through my servers and makes a master list of all of these scheduled tasks - I'm going through it to make this list, as a matter of fact! Article Here
  23. Documentation script that gets drive space from all of my servers used/free, and dumps it into an Excel doc. I have PRTG, which tracks this over time on a per drive basis, but getting all of the data together so I can see per month/per year data growth was easier this way.
  24. Documentation script that goes through all of my SQL servers and creates the docs for them (Databases, recovery models, space usage, versions, etc) Article Here
  25. Script that looks for services that are using things like "Administrator". Still fixing those..... Article Here
  26. Mailbox size reports - who's hogging all of the space on our mail server!? Article Here
  27. I have a batch file that reboots this one server once a week. Our app person insists. In this day and age? The server runs 2008R2, so I don't see why, but whatever. Article Here
  28. WSUS: Auto-declining Itanium patches, Windows Embedded patches, and superseded Windows Defender updates.
  29. Notification of computers in the "Unassigned Computers" WSUS group. Previous Article.
  30. List of computers that are pending reboot or have updates pending. If the number is over ten I start cracking skulls (yeah right).
There. Not only is that a pretty good list of existing content, but a pretty good roadmap for me to follow to start posting some of the scripts I've created.

Monday, July 1, 2013

My Best Script So Far - SQL Backup, Remote Mount, and DBCC

NOTE: After some actual time with SQL, I understand that this is not the most efficient way to do this - you can accomplish most of what I've scripted with Maintenance plans. HOWEVER, I am posting this script anyway, because I'm proud of it, it works, and people might get helpful examples of certain routines out of it.

I've alluded to this script in a couple of posts. It's the script I'm most proud of. It was very difficult to do with my somewhat limited knowledge of SQL and Powershell at the time, it taught me a lot about both, and it runs multiple times per day on production systems with no issues.

The issue this script solves is as follows: I am an "Accidental DBA". Meaning, I'm the admin who knows the most about SQL (which isn't much), and we don't have the budget for an actual DBA, so I'm it. I back up my SQL servers every night using Veeam Enterprise AND Backup Exec 2010. Veeam is great and all, but Backup Exec can restore my database right back into my SQL server with little fuss; the process is much easier. The databases I inherited are using simple recovery mode, so I have no point-in-time recovery capabilities. I could switch them to use a Full Recovery model, but I don't know all the ramifications of that change for the apps themselves (is it supported?), disk space, backup configuration changes, and then backup disk space. It occurred to me that if we lost a database at 3pm, I would have to restore from the previous evening, negating all work done throughout the day. No good. I needed a mid-day backup to mitigate this loss.

Like I said, this is probably not the best way to do this, but at the time (with no SQL training, or time to partake in it) it was the best I could come up with.

An outline of the process:
  1. Backup a database
  2. Copy the database to our testing SQL Server
  3. Mount the database
  4. Run a Database Consistency Check (DBCC from here on)
  5. Detach the database
  6. Keep the backup copy until next time
  7. Email me the DBCC report
A couple of notes. 
  • Powershell SQL stuff (and this script) doesn't work on SQL 2005.
  • You need to have SQL Server 2008 R2 Management Studio installed on the system you run this script from (the 2008 version might work, but I didn't try it).
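Since the script depends on those snapins being present, a guarded load like this (just a sketch) fails early with a clear error instead of dying partway through a backup:

```powershell
# Sketch: load the SQL 2008 R2 snapins only if they aren't already loaded
foreach ($snapin in 'SqlServerCmdletSnapin100','SqlServerProviderSnapin100') {
    if (-not (Get-PSSnapin -Name $snapin -ErrorAction SilentlyContinue)) {
        Add-PSSnapin $snapin -ErrorAction Stop
    }
}
```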
The players:
  • SQLTEST - This is the testing SQL Server
  • SQLPROD - This is the production SQL Server
The script, in its entirety, will be posted without edits (save anonymizing my server names) between the two horizontal lines below. I've documented the script very well, so just read the comments within the script for explanations.

Ugh, I'm not going to bother with the formatting either. This script is a beast, and apparently Blogger doesn't like pasting from Notepad++. Here's a link to the script, so you can download it from my Google Drive (no login required).

Here it is:

# 1. Select all code between the two lines that start with # ADDING A NEW DATABASE, and copy it to your clipboard
# 2. Paste the code you copied at the end of the file, between the last # END DATABASE X and # ADD ANY NEW DATABASES ABOVE THIS LINE - THIS SECTION IS THE END
# 3. Renumber the Database Heading (within the * box), also edit the following variables (search for them within your section) to reflect the database number:
# $StartTimeDatabase# (Two Instances)
# $EndTimeDatabase# (Two Instances)
# $ReportForDatabase#
# $srv# (Two Instances)
# 4. Log in to the production database server that houses the database we want to schedule backups and consistency checks for
# 5. Ensure that the location C:\Backups exists (If you use a different drive letter, please edit the appropriate variables)
# 6. Open SQL Server Management Studio, logging in as someone with administrative rights to all databases
# 7. Right Click the chosen database, choose "Tasks", then "Backup"
# 8. Modify Backup Set section "Name" to match the Database Name
# 9. Add the Destination to c:\backups\<DatabaseName.bak>, then remove the other location
# 10. Click Drop-down arrow next to "Script" at the top and choose "Script Action to new Query Window"
# 11. Close the backup windows by pressing the 'X' in the upper right hand corner (we will not actually run the backup job)
# 12. Referring to the key below on REFERENCE LINE:,
#     analyze the SQL Query syntax of the backup job to assign the appropriate values to EDITING SECTION within your database's code section
#     that you pasted in earlier at the end of the script. You need four values: $DatabaseName, $SourceBackupFilePath, $BackupFileName, $BackupName
#                                 [$DatabaseName]       '$SourceBackupFilePath\$BackupFileName'                  N'$BackupName'

# Add Required Powershell Snapins to work with SQL
add-pssnapin SqlServerCmdletSnapin100
add-pssnapin SqlServerProviderSnapin100

# Switch to root directory
cd c:
cd \

# Declare Constant Variables
$date = (get-date -Format M/d/yyyy)
$newline = "`r`n"
$dividerline = "==============================================================================="

# Variables dealing with the testing database server
$TestingDBServer = "SQLTEST"
$TestingBackupFilePath = "D:\Databases\Backups"
$TestingBackupFileUNC = "d$\Databases\Backups"
$TestingServerSQLDataFolder = "D:\Databases\Backups"
$TestingServerSQLDataFolderUNC = "d$\Databases\Backups"
$TimestampFolder = (get-date -uformat "%H%M-%m%d%Y")
$ArchiveStoragePath = "\\$TestingDBServer\$TestingBackupFileUNC\$Timestampfolder"
new-item -Path $ArchiveStoragePath -ItemType directory | out-null
$Pattern = "..errors and repaired ........"
$InsertString = "Found"


#******************
#*   DATABASE 1   *
#******************

# Declare DB Specific Variables
$StartTimeDatabase1 = (get-date) #Change the number to correspond to the Database # in the * Box above
$SourceDBServer = "SQLPROD\MSSQLSERVER" #The Hostname\Instance of the production database server
$DatabaseName = "DB1" #Refer to the key
$SourceBackupFilePath = "C:\Backups" #Refer to the key
$SourceBackupFileUNC = "c$\Backups" #This is the same as $SourceBackupFilePath, but replace the colon with a dollar sign
$BackupFileName = "DB1.bak" #Refer to the key
$BackupName = "DB1" #This is the same as your database name, but refer to the key

# Automatically Built Variables
$SourceBackupFullPathAndName = ($SourceBackupFilePath + "\" + $BackupFileName)
$TestingBackupFullPathAndName = ($TestingBackupFilePath + "\" + $BackupFileName)

# Backup Database
Invoke-SqlCmd -Query "BACKUP DATABASE [$DatabaseName] TO  DISK = N'$SourceBackupFullPathAndName' WITH NOFORMAT, INIT,  `
NAME = N'$BackupName', SKIP, NOREWIND, NOUNLOAD,  STATS = 10" -ServerInstance $SourceDBServer -QueryTimeout 65535

# MOVE (not copy!) the backup file to the Testing Server
move-item "\\SQLPROD\$SourceBackupFileUNC\$BackupFileName" -Destination "\\$TestingDBServer\$TestingBackupFileUNC"

# Restore Database onto the Testing Server
Invoke-SqlCmd -Query "RESTORE DATABASE [$DatabaseName] FROM  DISK = N'$TestingBackupFullPathAndName' WITH  FILE = 1,  NOUNLOAD,  STATS = 10" `
-ServerInstance "$TestingDBServer" -QueryTimeout 65535

# DBCC Check
Invoke-SqlCmd -Query "DBCC CHECKDB ($DatabaseName) WITH ALL_ERRORMSGS" -ServerInstance "$TestingDBServer" -QueryTimeout 65535

# Get DBCC Results from SQL Log and format them
[system.reflection.assembly]::LoadWithPartialName('Microsoft.SQLServer.Smo') | out-null
$srv1 = new-object ('Microsoft.SQLServer.Management.Smo.Server') $TestingDBServer
$Result = ($srv1.readerrorlog(0) | where {$_.text -like "*DBCC*"} | select text -last 1)
$Result -match $Pattern | out-null
$DBCCOutputText = $matches[0]
$DBCCOutputText = ($InsertString,$DBCCOutputText -join " ")

# Delete Database
$DeleteDatabaseQuery = "EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'$DatabaseName'
USE [master]
DROP DATABASE [$DatabaseName]"
Invoke-SqlCmd -Query $DeleteDatabaseQuery -ServerInstance "$TestingDBServer" -QueryTimeout 65535

# Move the BAK files that you used for the restores to a timestamped folder within the backup area
# These folders will be deleted after their creation times are over 24 hours
move-item "\\$TestingDBServer\$TestingBackupFileUNC\*.bak" -destination $ArchiveStoragePath

# This section creates the report for this database
$EndTimeDatabase1 = (get-date)
$ReportForDatabase1 = (
"Date: "+$Date+$newline+`
"Database Name: "+$DatabaseName+$newline+`
"DBCC Output: "+$newline+$DBCCOutputText+$newline+`
"Start Time: "+$StartTimeDatabase1+$newline+`
"End Time: "+$EndTimeDatabase1+$newline+`

#**********************
#*   END DATABASE 1   *
#**********************



#********************************************************************
#* ADD ANY NEW DATABASES ABOVE THIS LINE - THIS SECTION IS THE END  *
#********************************************************************

# Build the master report
$MasterReport = ("Beginning"+$newline+$dividerline+$newline+$ReportForDatabase1+$newline) #When adding a new database, append +$ReportForDatabase#+$newline just inside the end parenthesis

#This section counts instances of the $SearchFor text, and if it equals 1 (the number of databases checked) then the body of the email will begin with "DBCC Check Successful". This allows for an Outlook filtering rule.
$Content = $MasterReport
$SearchFor = 'Found 0 errors and repaired 0 errors'
$MatchedInstances = ($Content | Select-String $SearchFor -AllMatches | Select -Expand Matches | Select -Expand Value)
If ($MatchedInstances.count -eq 1){
$MasterReport = ("DBCC Check Successful"+$MasterReport)
} #End If

# For each folder in the Archive area on the testing server, get the creationtime and delete anything older than 24 hours
$Folders = (Get-childitem "\\$TestingDBServer\$TestingBackupFileUNC")
Foreach ($folder in $folders){
$FolderAge = (((Get-Date) - $folder.creationtime).totalhours)
If ($FolderAge -gt 24)
{remove-item $Folder.FullName -recurse}
} #End Foreach

# Section for emailing of the report
Send-Mailmessage -from "" -to "" -subject "DB1 Database Consistency Check for $date" -smtpserver "" -body $MasterReport -bodyasHTML

Friday, June 28, 2013

Running a Powershell Script on a Remote Computer From a Local Powershell Script

Let me tell you about why I needed this:

I will preface this by saying that I cite a use case which includes Veeam, but it's just an app that creates files, and your use case could include other files.

I have a server named FileServerPROD that serves as my Veeam Repository. At the end of my backup job, I have a robocopy process that synchronizes all of the files on FileServerPROD with FileServerDR at another site. These copy jobs take several hours at night, and an analysis posed the question, "What if something bad happens in the middle of the robocopy job?"

The answer is "Your offiste backup copies are toast, that's what." Uh.... unacceptable.

I came up with a process that creates a "Yesterday" folder and creates a copy of the backup files BEFORE my robocopy process starts its thing. This way, I keep a copy of my old files that is untainted should the robocopy go south. The problem I encountered was that if I tried to do this from my side of the WAN, the copied files would have to come all the way to the computer that the copy was initiated from (across the WAN) and then BACK to the destination - across the WAN again! I needed to initialize the copy operations from FileServerDR so the data would all stay local (and speedy).

Powershell isn't designed to easily launch Powershell scripts remotely from inside of other Powershell scripts. Obviously, the ability would be a security concern. So I cobbled together this workaround.
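For what it's worth, if PowerShell Remoting is enabled in your environment, Invoke-Command handles this scenario more directly. A sketch, assuming WinRM is already configured on FileServerDR:

```powershell
# Alternative sketch using PowerShell Remoting instead of the workaround below
# Assumes Enable-PSRemoting has been run on FileServerDR and WinRM traffic is allowed
Invoke-Command -ComputerName FileServerDR -FilePath C:\ps\Daily-Apps.ps1
```

Invoke-Command also waits for the remote script to finish before returning, which removes the need for any process-polling loop. Remoting wasn't an option for me, so here's the workaround I used instead.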

First off, here's the script that runs at the end of the Veeam job and lives on FileServerPROD, which I will dissect below. I've made the most pertinent code red:

$starttime = ((Get-Date)|out-string)

#Execute Remote Powershell script to move the current backup files into a "Yesterday" folder and make new copies to run the mirror operation on
$Server = "FileServerDR"
$process = [wmiclass]"\\$Server\root\cimv2:Win32_Process" #Get the remote Win32_Process WMI class
$result = $process.Create("powershell.exe -File C:\ps\Daily-Apps.ps1")
$ProcessID = $result.ProcessId

#Make sure the process has finished running its course before initiating the robocopy job

#Only when the process initiated above completes will $Running be $null, ending the loop
$Running = 1
While ($Running -ne $null){
Start-Sleep -seconds 60 #wait 1 minute
$Running = (get-process -id $ProcessID -ComputerName $Server -ea 0) #Check the process again
} #End While

$YesterdayCreated = ((Get-Date)|out-string)
#The section below copies the Veeam Jobs, and then sends a report
start-process "c:\robocopy.exe" -ArgumentList 'R:\Backups\Daily-Apps \\FileServerDR\R$\Daily-Apps /MIR /R:1 /W:5 /NP /LOG:C:\Logs\VeeamRobocopy-Daily-Apps.log' -wait

$endtime = ((Get-Date)|out-string)
$To = ""
$From = ""
$Body = (("Start Time: $starttime")+("Yesterday Folder Created: $YesterdayCreated")+("End Time: $endtime"))
$Subject = "Robocopy Offsite - Veeam Daily-Apps - Job Results"
$SMTPServer = ""
$Attachment = "C:\Logs\VeeamRobocopy-Daily-Apps.log"
Send-Mailmessage -to $To -Subject $subject -From $From -body $body -smtpserver $SMTPServer -attachments $Attachment

And here's the script that runs on FileServerDR:

#Remove the current "Yesterday" folder
remove-Item R:\Daily-Apps-Yesterday -Recurse -Force -ea 0

#Create New Folder
mkdir R:\Daily-Apps-Yesterday

#Copy contents of the old folder to the new folder
copy-Item R:\Daily-Apps\*.* R:\Daily-Apps-Yesterday

Now, let's go through the FileServerPROD script:

$starttime = ((Get-Date)|out-string)
This gets a time marker for information purposes.

#Execute Remote Powershell script to move the current backup files into a "Yesterday" folder and make new copies to run the mirror operation on
$Server = "FileServerDR"
$process = [wmiclass]"\\$Server\root\cimv2:Win32_Process" #Get the remote Win32_Process WMI class
$result = $process.Create("powershell.exe -File C:\ps\Daily-Apps.ps1")
$ProcessID = $result.ProcessId
The code above uses the remote Win32_Process WMI class to launch a Powershell process on FileServerDR, which runs the script remotely. The last line gets the process ID, which we will use next.

#Make sure the process has finished running its course before initiating the robocopy job
#Only when the process initiated above completes will $Running be $null, ending the loop
$Running = 1
While ($Running -ne $null){
Start-Sleep -seconds 60 #wait 1 minute
$Running = (get-process -id $ProcessID -ComputerName $Server -ea 0) #Check the process again
} #End While
THIS IS THE KEY! This While loop puts the script on pause until the remote script finishes by checking whether the process ID still exists. If the process is gone (finished), the get-process query returns $null, ending the loop.

$YesterdayCreated = ((Get-Date)|out-string)
#The section below copies the Veeam Jobs, and then sends a report
start-process "c:\robocopy.exe" -ArgumentList 'R:\Backups\Daily-Apps \\FileServerDR\R$\Daily-Apps /MIR /R:1 /W:5 /NP /LOG:C:\Logs\VeeamRobocopy-Daily-Apps.log' -wait
Here we get another time marker and the script starts the robocopy job.

$endtime = ((Get-Date)|out-string)
$To = ""
$From = ""
$Body = (("Start Time: $starttime")+("Yesterday Folder Created: $YesterdayCreated")+("End Time: $endtime"))
$Subject = "Robocopy Offsite - Veeam Daily-Apps - Job Results"
$SMTPServer = ""
$Attachment = "C:\Logs\VeeamRobocopy-Daily-Apps.log"
Send-Mailmessage -to $To -Subject $subject -From $From -body $body -smtpserver $SMTPServer -attachments $Attachment
And here we get an "end time" marker, then an email is sent with all of the time markers we've gathered along with the robocopy log as an attachment.

Here's the dissection of the script that runs on the other side, creating that "Yesterday" folder and a safe copy of my data:

#Remove the current "Yesterday" folder
remove-Item R:\Daily-Apps-Yesterday -Recurse -Force -ea 0
This resets the yesterday folder

#Create New Folder
mkdir R:\Daily-Apps-Yesterday
This makes a new yesterday folder

#Copy contents of the old folder to the new folder
copy-Item R:\Daily-Apps\*.* R:\Daily-Apps-Yesterday
Like it says, this copies the contents over