Monday, December 24, 2012

Sometimes, Powershell ISN'T the answer!

We're looking around at SANs, because our current SAN is out of headroom performance-wise. We have oodles of space but not enough IO. One idea put forth was to shut down VMs when they aren't doing anything and power them on well before they're needed again. "I can totally handle this," I thought, with the command Shutdown-VMGuest already bouncing around in my head. Shutdown-VMGuest is a VMware PowerCLI command that... shuts down guest VMs. I love how command names are so intuitive. The verb-noun system is really great.
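
For reference, the PowerCLI version bouncing around in my head would have looked something like this (the vCenter server and VM names are made up):

```powershell
# Connect to vCenter (prompts for credentials interactively)
Connect-VIServer -Server vcenter01

# Gracefully shut down the guest OS (requires VMware Tools in the guest)
Get-VM -Name "TestVM01" | Shutdown-VMGuest -Confirm:$false

# ...and later, power the VM back on
Get-VM -Name "TestVM01" | Start-VM
```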

So the next issue with creating a PowerCLI script for this is passing credentials. I hate working with credentials in Powershell, I really do. Then I remembered that vCenter can schedule tasks on its own! I poked around in the interface, because I pretty much hang out in Hosts and Clusters all the time, with only a brief foray into the Datastores section. Lo and behold, there are Scheduled Tasks! And setting one up was stupid easy!
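
(For the record, the usual workaround for stored credentials is Export-Clixml, which encrypts the password so only the same user on the same machine can read it back. The path here is hypothetical:)

```powershell
# Run once, interactively, to save the credential to disk
Get-Credential | Export-Clixml -Path C:\Scripts\vcenter-cred.xml

# Later, in the scheduled script, read it back and use it
$cred = Import-Clixml -Path C:\Scripts\vcenter-cred.xml
Connect-VIServer -Server vcenter01 -Credential $cred
```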

The moral of the story is that sure, Powershell can do everything (yes, including your dishes). It's easy to go to it for everything, but even with its seemingly unlimited OneRing-like power, you STILL need to step back and evaluate the best tool for the job.

Happy Holidays everyone!

Saturday, December 22, 2012

Restore Deleted Items from a Public Folder

An employee exiting the organization decided to "clean up" some files they thought no one was using. So they deleted a bunch of stuff in some public folders. Yes, we are still running Exchange 2010 SP1 with fricking public folders. Maybe next year we'll get a new fax solution that works differently, but for now we have what we have.

I found a great utility to restore files deleted from public folders and it worked great. The program is stupid easy to use, but after extraction you must follow the instructions in the readme.txt to get it to work. I'm not even going to explain how to use it.

It's called ExFolders and can be downloaded from Microsoft Technet here. You can read some in-depth analysis regarding it at the Exchange Team Blog here.

Thursday, December 20, 2012

Failed Login Attempts - The Second Half

So I got all giddy about my creation of a script that would email me the previous day's failed logins, and blogged about it before real-world testing had occurred. The result was that ElDump only works with Windows 2003. I tried for quite some time to get a good Get-EventLog dump out of my 2008 R2 domain controllers. A couple of observations: Why does it take so long to run Get-EventLog remotely? And why don't they split up the sections in an event's Message property to be more accessible? Perhaps every hard return in the Message field could delineate another element in an array? But, I digress...
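
In the meantime, you can build that array yourself by splitting the Message property on line breaks. A quick sketch:

```powershell
# Grab the newest 4771 event and split its Message into one element per line
$event = Get-EventLog -LogName Security -InstanceId 4771 -Newest 1
$lines = $event.Message -split "`r`n"
$lines[0]   # the first line of the message text
```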

In the end, it was a post I found on the Spiceworks Community (GREAT resource by the way) that gave me what I needed. The following script builds on what I found in the original post. So, a big shout-out to B-Rad2011. Ninety percent of this is his, but I will take credit for adding a column to the output telling which hostname the user failed to log in from instead of only giving the IP address. I learned how to do reverse DNS lookups here.

#Here we flesh out some variables
$Date= Get-date      
$DC= ""
$Report= "c:\temp\report.html"

#Here we create a web template (note: this needs to be a here-string assigned to $HTML, which is used below)
$HTML = @"
<title>Event Logs Report</title>
<style>
BODY{background-color:#FFFFFF}
TABLE{border-width:thin;border-style:solid;border-color:black;border-collapse:collapse}
TH{border-width:1px;padding:1px;border-style:solid;border-color:black;background-color:ThreeDShadow}
TD{border-width:1px;padding:0px;border-style:solid;border-color:black;background-color:transparent}
</style>
"@

#Get the event log, then extract some properties
$eventsDC = Get-EventLog security -ComputerName $DC -InstanceId 4771 -After (Get-Date).AddDays(-1) |
   Select-Object TimeGenerated,ReplacementStrings |
   ForEach-Object {
      $IPAddress = ($_.ReplacementStrings[6]).Remove(0,7)
      $Hostname = ([System.Net.Dns]::GetHostByAddress($IPAddress)).HostName
      New-Object PSObject -Property @{
         UserName        = $_.ReplacementStrings[0]
         Source_Computer = $Hostname
         IP_Address      = $IPAddress
         Date            = $_.TimeGenerated
      } #End New-Object -Property
   } #End ForEach-Object

#Inject the object created above into an HTML page
$eventsDC | ConvertTo-Html -Property Date,Source_Computer,IP_Address,UserName -head $HTML -body "<H2>Generated On $Date</H2>"| Out-File $Report -Append

#Mail the page, and then delete the original
$Text = "Password Failures from $DC"
Send-MailMessage -From "" -To "" -Subject $Text -SmtpServer MailServer01 -Body $Text -Attachments $Report
del $report

Sunday, December 16, 2012

WSUS - Who approved THAT update?

Stumbled across a neat little tool last week. This tool goes through your WSUS log file to tell you who approved a certain update. It's called the WSUS Approval History Log.

Friday, December 14, 2012

Checking for Failed Login Attempts Due to Incorrect Passwords

So we got hit with some virus this week, and it really put security at the top of our list again. A couple of things happened as a result. First and foremost, the virus spread because someone hadn't rolled out a patch from the WSUS server. I ran Microsoft's Baseline Security Analyzer against the machines in our site, and it came up clean after I rolled out that patch.

We had been toying with the idea of what to do about Java, Adobe Reader, and Adobe Flash patches for some time, and I ran across a really nice product by Dameware (now owned by Solarwinds) called Dameware Patch Manager, and since we had some money to spend before the end of the year I was able to buy it. The nice thing about this is that it pushes patches out using your existing WSUS infrastructure, so we'll be getting the (seemingly) hundreds of different versions of Java et al patched up to current. That'll negate some other nasty attack vectors. It also ties into WSUS to give you a lot more reporting options. WSUS reports have always irritated me, and I hope that when I get this deployed early next year that I'll be able to clean that up. I'll keep you up to date. I looked at Ninite Pro, but it didn't give me the side-benefit of advanced WSUS reporting.

One of the actions of the virus was that it was trying to brute-force a few accounts. I was able to trace the infection using the security log on our domain controllers, but there's got to be a better way! The best way is buying a SIEM (Security Information and Event Management) device or software, which monitors logs and other sources for security incidents and notifies you automatically. Since I really don't have the cash to get a SIEM up and running, I decided to write a Powershell script that would comb through the security logs on our domain controllers and report any bad password attempts.

I tried to pull this off entirely with Powershell, but working with the Windows event logs remotely from within Powershell was proving tedious and slow. I found an app called ElDump that will dump the logs into a text file. The download link on the main site for ElDump (here) is broken, so I had to scour Google for it, but I found it. It's a marvelous little thing, really! Also, I tried to get one block of script working for all 4 of my domain controllers, but in the end I just copied and pasted the block I had working and changed the name of the target server and the output file names, because I had better things to do than write beautiful script.

Here it is. My explanations are in the comment lines (comments begin with a # symbol):

#Use ElDump to export any events with ID 675 from the security log in the past 24 hours from dc1, and redirect the output to junk.txt
C:\eldump\eldump.exe -l security -e 675 -O dts -m Security -A 24 -s \\dc1 -M >> junk.txt

#Create Output file variable
$File = "C:\badlogins-DC1.txt"

#Grab the content, whittle it down to include only lines containing "0x18" (bad password) and output to file. I had to change the width so it wouldn't wrap.
Get-Content junk.txt | select-string -pattern "0x18" | out-file -width 140 $File

#Delete the first file, set a string variable for use with the email subject and body, email the file, and delete the output file
del junk.txt
$Text = "Password Failures from DC1"
Send-Mailmessage -from "" -to -subject $Text -smtpserver mailserver1 -body $Text -attachments $File
del $File

A side benefit to this script was that I found a long-forgotten scheduled task that couldn't run because the stored password was bad. Good times.

EDIT: For Windows 2008 (and up) servers, you need to change the ElDump command to return event 4771 instead of 675.

Tuesday, December 11, 2012

You put 15GB of files on my Fileserver yesterday. Really?

So one of the first things I implemented in this job was system monitoring. I absolutely love Paessler PRTG. Not only does it alert me when my stuff is down, but it also tracks historical data, like disk use (I should also give props to Paessler for the Android and iPhone apps!).

Well, last week the free space threshold was broken on one of my servers. I loaded up the historical data and saw that 15GB worth of space disappeared in a very short amount of time. In my experience, when that much space goes away all of a sudden, it's always been something silly, like a DBA backing up an entire SQL database to their home drive, or someone downloading seasons of TV shows. I needed to identify what files were created so I could see whether the files were legit and who uploaded them so that I could read them the riot act. Powershell to the rescue!

get-childitem -recurse | where {$_.CreationTime.Date -eq [datetime]"11/20/2012"} | select name, length, fullname | out-file c:\temp\15GBReally.txt

So here, I'm getting all files that were created on 11/20/2012, and I'm returning their names and their paths, which I'm then outputting to a text file.

I'm sad to report that the files were indeed legit, and I had to bottle my deep, deep sysadmin rage.

Tuesday, December 4, 2012

Changing the Edition of your Windows Install (DATACENTER LICENSES! WOOT!!)

We recently bought Datacenter licenses for each of our VMware hosts. This gives us the right to make as many Windows Server VMs as we want to on each host. How do we change the edition from Windows Server Standard (or Enterprise) to the Datacenter edition without rebuilding the server though? Here's how:

  1. Open an administrative command prompt.
  2. Type: DISM /online /Get-CurrentEdition
     This shows you the current edition.
  3. Type: DISM /online /Get-TargetEditions
     This shows the editions you can change to and, more importantly, tells you what to type for the 'edition ID' in the next command.
  4. Type: DISM /online /Set-Edition:<edition ID> /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

Here's the trick that stuck me the first time through. For the product key field in the previous command, DO NOT type in your actual Datacenter key. Instead, type in the generic KMS key that Microsoft provides. A list of them can be found here on Technet.

Now, the server will actually change the edition, and you will be prompted to reboot. 
After rebooting, open up computer properties and enter the key as you normally would (Start, right-click on My Computer, choose properties, scroll to the bottom, and select 'Change Key'). NOW enter your Datacenter key and activate Windows. 

A lot of the 'How-Tos' I ran across on the internet neglected to tell me to use the generic KMS key from Technet first, then use my key to activate after the reboot.

Tuesday, November 27, 2012

Veeam 6.5 - Using It and Loving It

I've been using Veeam Backup and Replication 6.5 for a couple of weeks now with no issues, having upgraded (again, with no issues) from version 6.1. I'm not doing replication or anything fancy, just backing up some VMs, but I like it. The two biggest new features for me are the additional job scheduling choice of "Start after this other job" (which is LONG overdue; every application that can schedule more than one task should take a page from Veeam and do it like this, it's just beautiful!) and the fact that Veeam ONE (Veeam's monitoring product, which you should totally rush out and buy) can now give you some really interesting metrics from the Veeam Backup and Replication infrastructure, such as how much data you're backing up (so you can track growth!) and when you're going to run out of space, given the current backup trend in your organization. I am seeing slightly improved performance, but honestly the product is so amazing to begin with (compared to something like... uh... Backup Exec) that I don't even care.

Another cool thing I ran across is that Veeam offers individuals certified by Microsoft or VMware a free license for "lab" use of Veeam. You can check that out here. I didn't see where they actually checked for your cert, so, uh, there's that.

Tuesday, November 20, 2012

Organizing My Projects and Tasks

I've been struggling lately to find a good way to keep track of my projects. I want the ability to look back and see what I've accomplished, for resume purposes or just to remind myself of how I went about implementing something, but also to use it as a to-do list to stay on track. Microsoft Project was a much steeper learning curve than I wanted; I am not a project manager. At my last job, the IT department shared a spreadsheet that we would pore over every week in an IT meeting. Every row held a certain project, who was assigned to it, when it was due, and what had been done thus far toward completion. When a project was completed, there was a field to mark completion, and a macro would move it to the bottom of the list. This worked fine, but it was unwieldy for quickly finding out what I needed to be working on and prioritizing. Recently, I moved to an online to-do service. This seems to be working OK, but when I complete a project there's no way to save the list for posterity. I've resorted to keeping a separate text file with projects I've completed, but it's not nearly as in-depth as I would like, because I don't take the time to flesh out all of the nitty-gritty details of the sub-tasks, just the overall view. I guess I'm sticking with it until I find something better.

Monday, November 19, 2012

Purging deleted users from Public Folder ACLs (Delete NT User: Generic SIDs)

We are on Exchange 2010 and still have a ton of public folders (yes, yes, we'd love them to die off, probably more than Microsoft would, but what are you going to do?). We've been combing through all of our event logs, which had previously not been done, and are resolving various error messages. One we ran across in our Exchange server's application log is:

Event ID: 2028
Transport Delivery MSExchange Public Store
The delivery of a message sent by public folder AFEFE2D3A4AAE242A27C26178911274C-000005387E74 has failed
To: Someuser

While investigating this, we found that a lot of our public folders had hanging SIDs, which are the "NT USER:S-1-5-93859384-1394871948"-like entries you see on an ACL when that user has been deleted from Active Directory.

Fixing all of these hanging SIDs from each public folder would be a nightmare if done manually (we have several hundred public folders). Powershell should be able to handle this! And did it ever. In one line:

get-publicfolder "\" -recurse -resultsize unlimited | get-publicfolderclientpermission | where {$_.user -like "NT User:S-1-*"} | % {remove-publicfolderclientpermission -identity $_.identity -user $_.user -access $_.accessrights -confirm:$true}
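
Before letting that loose on several hundred folders, it's worth a dry run first. Remove-PublicFolderClientPermission supports -WhatIf, so you can preview what would be removed without touching anything:

```powershell
# Same pipeline, but -WhatIf only reports what WOULD be removed
get-publicfolder "\" -recurse -resultsize unlimited |
    get-publicfolderclientpermission |
    where {$_.user -like "NT User:S-1-*"} |
    % {remove-publicfolderclientpermission -identity $_.identity -user $_.user -access $_.accessrights -whatif}
```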

Monday, November 5, 2012

RIP Windows Media Center

If you've been keeping up with the new Windows 8 features, you might have read that Windows Media Center has been ripped out of the new OS. It is now available as a $39.99 add-in. Here is the Microsoft site with instructions on how to add it, but you'll need a key. Also on this page, you can request a key, which will be free until January 31. You might scoff now, but I use Media Center to stream stuff to my Xbox 360. Who knows what it might do in the future? My advice: get a key for free while you can. You probably won't use it, but who knows?

Monday, October 22, 2012

Powershell/Veeam Mixed Post

My retention periods are as follows:
Daily VMs: These VMs are servers that have data that changes often. Examples include File servers, SQL Servers, and Mail Servers. I keep four restore points, and these jobs run on M, T, W, and Th. The jobs are daisy-chained using a Powershell script like this:

Add-PsSnapIn VeeamPSSnapIn
get-vbrjob -name "Daily-Web" | start-vbrjob

Upon completion of the last job, the files that compose the Veeam backup job are copied over a WAN link using robocopy to a NAS that I recently acquired. I'm very happy with the NAS! It's an Iomega StorCenter px6-300d that I bought with no drives for $900. Then I bought 6 3TB SATA disks for another $900 or so and arranged them in RAID5. I have around 14TB in usable space for less than 2 grand!!!
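
The robocopy step itself is nothing fancy; a sketch along these lines (the source folder, NAS path, and log file here are hypothetical):

```powershell
# Mirror the finished Veeam backup folder to the NAS across the WAN link.
# /MIR mirrors the tree, /Z uses restartable mode (handy on a flaky link),
# /R and /W keep the retry behavior sane, and /LOG records what happened.
robocopy "D:\VeeamBackups\Daily-Web" "\\nas01\VeeamOffsite\Daily-Web" /MIR /Z /R:2 /W:10 /LOG:C:\temp\robocopy-veeam.log
```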

Weekly VMs: These are application servers that very rarely change, like terminal servers and print servers. The weekly job also includes all VMs in my "Daily-VM" jobs, so I can save one copy of them every week. This allows me to have grandfather-father-son retention. I save 4 restore points for all of these VMs, and after the job runs I copy the files offsite to the NAS. The challenging part was working out a way for my script to detect that it was the last Friday of the month and then start the Backup Exec job to also write the Veeam backup files to tape. My first search turned up a Powershell function created by PoSH Pete called LastXofMonth. I whittled the script down, because I only need to find the last Friday, and came up with this:

#Get the date of the last Friday of this month
$DayName = "Friday"
$LastDayOfMonth = (Get-Date -Year (Get-Date).Year -Month (Get-Date).Month -Day 1).AddMonths(1).AddDays(-1)
If ($LastDayOfMonth.DayOfWeek -eq $DayName) {
    $Answer = $LastDayOfMonth
} #End If
Else {
    While ($Answer -eq $Null) {
        $LastDayOfMonth = $LastDayOfMonth.AddDays(-1)
        If ($LastDayOfMonth.DayOfWeek -eq $DayName) {
            $Answer = $LastDayOfMonth
        } #End If
    } #End While
} #End Else

#Get today's date and do some date formatting
$Date = ((Get-Date).ToShortDateString())
$LastFriday = ($Answer.ToShortDateString())

#If today's date is on or past the date of the last Friday, start the BE job that writes monthly Veeam backups to tape
If ([datetime]$Date -ge [datetime]$LastFriday) {
    Start-Process "C:\Program Files\Symantec\Backup Exec\bemcmd.exe" -ArgumentList '-o1 -j"CITYBU01 - Veeam to Tape"'
} #End If

Something I have forced myself to do in my scripts is to put comments at the end of any command blocks (If, Else, While, etc.). Anywhere there's code within a set of curly braces that spans more than one line gets a comment. It makes nested conditional statements much easier to trace out and troubleshoot.
One nice thing I discovered when working with datetime datatypes in this script is that if you take two dates you can compare them. Well, I didn't discover it; it makes sense. I had never done it before and was pleasantly surprised that it worked like I thought it should.

[datetime]$date1 = "11/1/2012"
$date2 = ((get-date).ToShortDateString())
$date1 -gt $date2

This evaluates as true, because November 1 is "greater than" today, which is 10/22/2012.

Sunday, October 21, 2012

I Rooted my Phone and Found Some Great New Apps

If you read my post from Saturday, you know I watched the guy next to me use an app called DroidSheep to hijack someone's Facebook connection. Until then, I'd avoided rooting my phone because it just works. BUT, this app got me wondering what other apps were out there that would make things work better if only my phone were rooted. So, I rooted my phone. Besides the aforementioned DroidSheep app, I also downloaded and installed:

  • ShootMe (take screenshots just by shaking your phone)
  • Titanium Backup Root w/ Pro key (it backs up to DropBox, holy crap!). There's a how-to on Lifehacker for setting up Titanium to properly protect your phone here. It's a little dated now, but a little adaptation seems to have done the trick.
  • AdFree (blocks ads by redirecting requests for a curated list of ad-serving hosts)

I also bought and installed Tasker and Locale, which will make my life MUCH easier by automating things (too bad this thing doesn't run Powershell amiright?). I've missed too many calls because my phone was left on the silent setting, and I can set it to put my phone in airplane mode when I go to the movies. Locale deals with these issues. Tasker looks dead useful, and there's a whole list of things from their wiki that I want to try.

Some more amazing apps I stumbled upon are:
SwipePad (Gesture driven pop-up menus - Just try it, I've been looking for something like this for a looooong time)
Multicon (lets you put four icons in a widget in the space of one icon)

Aaaand I'm also on LauncherPro, having moved from Holo Launcher. To make my home screens less busy, I've removed text labels from all of the icons on my home screen, which is taking some getting used to.

Saturday, October 20, 2012

SUMIT 2012 - Security at U of M IT

Yesterday was a lot of fun: I attended the SUMIT security conference at the University of Michigan. This is an annual conference, and every year I'm reinvigorated to learn more about security. The guy next to me was showing me Droidsheep on his Galaxy Tab, and successfully intercepted another attendee's Facebook connection over the open wireless. We made an innocuous post on his wall touting the conference.

Some interesting things I learned:

  • I NEED to start playing with the Backtrack Linux distro. 
  • I now have zero faith in the security of unencrypted files stored on computers that are connected to the internet. The hackers are too good, and too numerous. I wonder what will happen when no one can rely on proof of identity anymore? When everyone's identity is out there, how can any agreements be trusted that aren't made in person? I asked the group I was with that question, and they immediately said biometric devices, but that's just another digital system that can and will be manipulated.
  • On your network, you should block any outgoing UDP traffic where the sender's address is not within your network (in other words, spoofed). Evidently, this act is considered just being a good netizen. This prevents many different kinds of attacks that use spoofed UDP packets from being perpetrated from your network.
  • I never thought about it before, but I wonder what Google thinks of all of the insecure Android devices out in the public? Think about it: If you own an Android device (and you're not rooted) you don't get updates for Google's OS until your carrier releases them. I'm on Sprint and I've only had Ice Cream Sandwich for 2 months! I've now found a very wonderful reason to root my phone: security! Isn't that ironic?
  • There was a presenter from the ACLU speaking about how easy and pervasive wiretapping is now. Cellphone companies track your every movement, sure. We all know this. But do you know how long the different companies keep your data? AT&T is the worst offender at 3 years. How many requests from law enforcement were made last year? Something like 1.5 million!

Thursday, October 18, 2012

Nice Stopwatch Tool (online or off)

I've been doing a lot of real-time-based testing lately for some odd reason. How fast does this file open? How long does it take to transfer a gigabyte file to that server across this WAN link? I was using my phone at first, but I wanted a desktop option and found a great site called Online Stopwatch. You can use their many different timers right through the webpage, or download them and run them locally.

Yeah I'm reaching (not to detract from the utility of the online-stopwatch!). I've been pretty much building virtual machines and scoping out performance requirements for a new SAN we'll hopefully be ordering in Q1 2013. Pretty boring stuff.

Monday, October 15, 2012

How to Tell Windows to Ignore One of Your NICs

My testing computer runs VMware Workstation and has two network cards. One NIC is connected to my production network and is my main NIC. The other is used to connect any virtual networks within my VMware Workstation environment to the internet if I need to. I didn't want the computer to send any traffic out of this testing NIC. Of course, disabling that NIC in Windows would have rendered it unusable to my VMware environment. Here's how I got around that:

  1. Open the properties of the test NIC.
  2. Open the Internet Protocol Version 4 (TCP/IPv4) properties.
  3. Click "Advanced".
  4. Uncheck the "Automatic Metric" box and enter a high number. I used 500.
  5. Hit OK multiple times to close the dialog boxes and apply the setting.
Now, when Windows needs a path, it will see the high metric and use the production NIC. My test VMs will only see the NIC I assign to them (the test NIC with the high metric) and won't have any choice but to use the test interface for outgoing traffic.
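
Incidentally, the same metric can be set from an elevated command prompt with netsh. The interface name here is made up:

```powershell
# Pin a high metric on the test NIC so Windows prefers the production NIC
netsh interface ipv4 set interface "Test NIC" metric=500

# To undo it later, hand the metric back to Windows
netsh interface ipv4 set interface "Test NIC" metric=automatic
```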

Sunday, October 14, 2012

A New Remote Desktop Services Server! (Terminal Server)

Preface: I use the term Terminal Server and RDS Server pretty interchangeably. RDS Server is the new terminology, but using the old "Terminal Server" is a difficult habit to break.

A couple of weeks ago we had a request for six new workstations to be created for outside contractors to remote into. They already had 3 Windows 7 VMs for this purpose, and this was just too much. I had been advocating against Terminal Servers because I was in charge of them at my last job and hated (HATED) them, but I'm not running 9 Windows 7 VMs just to meet this goal; management simplicity, space, and all that, you know. After I built the new RDS Server I discovered that my loathing was really just targeted at roaming profiles, and not terminal servers, so I feel better about that. Thankfully, this RDS Server will be used by outside contractors, and I don't have to worry about redirecting folders or roaming folders.

I did run into a few frustrating problems, but I found the solutions scattered about the internet. First, I found a pretty good guide to locking down an RDS Server on Technet. One good thing about the article is that it talks about removing libraries, whereas other RDS lockdown articles I found were written for Windows Server 2003.

Issue number two was that I was having a hard time figuring out how to remove the Administrative Tools from my users' start menu. There wasn't a group policy that affected this, but I DID find a group policy preference!
  1. In your group policy, go to User Configuration > Preferences > Control Panel Settings > Start Menu.
  2. Right-click > New > Start Menu (Windows Vista), then browse to Administrative Tools and choose "Do not show this item".
Another issue (and most infuriating here) was that none of my icons were showing up on my users' desktops. Icons that you create in C:\Users\Public\Desktop (Windows Server 2008/R2/Vista/7) or C:\Documents and Settings\All Users\Desktop (Windows Server 2003/XP) should show up for everyone, and mine weren't because of a group policy that I had set called “Remove common program groups from Start Menu”. This can be found in "User Configuration > Policies > Administrative Templates > Start Menu and Taskbar", and has the unintended consequence of hiding icons on all users/public desktops. So, I set the policy to "Not Configured" and then removed the "Everyone" and "Domain Users" groups from the C:\ProgramData\Microsoft\Windows\Start Menu (Windows Server 2008) or C:\Documents and Settings\All Users\Start Menu (Windows Server 2003) folder permissions. You will need to remove inheritance to make this happen. 
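
If you have to make that permissions change on more than one server, icacls can script it. This is a sketch, and the domain name is made up:

```powershell
# Disable inheritance on the common Start Menu folder, keeping the current ACEs as explicit entries
icacls "C:\ProgramData\Microsoft\Windows\Start Menu" /inheritance:d

# Then remove the entries that Everyone and Domain Users had inherited
icacls "C:\ProgramData\Microsoft\Windows\Start Menu" /remove:g "Everyone" "MYDOMAIN\Domain Users"
```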

Friday, October 12, 2012

Great (GREAT) Powershell cheat sheet

A cohort over on the Ars Technica forums (seriously great site and community; if you aren't a member already, you should change that post-haste) created a great Powershell cheat sheet. I printed it out and hung it in my cubicle, and it's saved me a ton of time so far. I seemed to spend an inordinate amount of time using get-command because I couldn't remember exactly what command I was after, and this has cut that down quite a bit.

You can find Cookie Monster's Powershell cheat sheet here.

Wednesday, October 10, 2012

VMware Workstation - Can't take ownership/Windows 8 rant

I'm currently working on creating a standard image for my company's desktops, and I'm playing with a neat Linux-based imaging application called FOG. It really is a nice piece of software to use, after flailing my arms trying to make the Microsoft Deployment Toolkit do what I wanted. Seriously, I just want to make an image and deploy it. Is there some reason Microsoft can't make a more streamlined approach for an IT shop that doesn't have someone completely dedicated to this project?

So I built a domain controller and configured my FOG server, then built a VM that I wanted to be a base image. I installed Windows 7, all of the updates and the service pack, then more updates (jeesh, there's a lot of updates), and a few static pieces of software that our employees use. I got everything just so, and shut down the VM. I made a copy of the VM's folder so I could go back if I found there were things I forgot to do while in testing. Makes sense, right? I didn't want to go through the entire process again; the updates were brutal! I made another copy after I had sysprepped the Windows install. I recommend doing this because I needed to PXE boot the machine, and trying to boot to something other than the hard disk in a VM on an SSD is nearly impossible, especially since you can't get focus on the VM inside of Workstation until it starts booting, and you have 2 seconds before "Starting Windows" comes up. On this point: please, VMware, give us somewhere to adjust the BIOS screen delay before boot!

Sure enough, I didn't get everything quite right the first time through. So I powered the VM down, copied the backup I'd made back over, and went to open the VM, only to see a strange message stating that the VM was in use. The actual error states: "This virtual machine appears to be in use. If this virtual machine is already in use, press the Cancel button to avoid damaging it. If this virtual machine is not in use, press the Take Ownership button to obtain ownership." Taking ownership fails. Pressing Cancel doesn't help your quandary either. The fix is to look in the VM's folder and delete any .lck files you see. Then you can use the VM again.

Also in my virtual machine playground, I've been toying with Windows 8. Not a big fan. I can adapt and overcome, but I can just picture my users' eyes glazing over as I tell them that to shut down the computer, they have to open up the charms bar. The fact that Microsoft is FORCING this UI change on everyone is ridiculous. Admins know how to manipulate just about everything via group policy, and the fact that Microsoft isn't giving us control over whether our users boot into the Metro screen or the desktop is maddening. Actually, the fact that they're forcing everyone to make this transition at all is maddening, but especially for those of us who have help desks to run and need to manage (dictate, whatever) as much as we can. Also, the windows look blocky. Remember when Vista came out and everyone hated the GUI and called it the Playschool GUI? Well, this actually looks like a Playschool GUI. The sleek edges are gone, and everything looks blocky. Like... Legos. I hereby dub this the Lego interface.

On the other side of this coin, I'm really excited about Windows Server 2012 (what little I've used of it). Yeah, the Metro interface rears its ugly head there as well, but there are actually features in it that make the annoyance worthwhile. Personally, I can't wait to have the time to play with IPAM, which is supposed to track all of my IP address space for me, instead of me fumbling around with spreadsheets (and trusting others to accurately note changes when they make them). Also, we've finally got DHCP failover, which is at least a decade overdue, in my opinion. I'm a VMware guy right now, but if Microsoft keeps on like this I could become a Hyper-V convert, ESPECIALLY if VMware keeps ticking off its customer base like it did last year with that offensive vRAM Entitlement cash grab (which it has relented on, to be fair). Microsoft is making some serious inroads, and VMware will need to step up its game if it wants to stay ahead.

If you need a good overview of what's new in Server 2012, there's a really good series of articles out there that delves into it.

Tuesday, October 2, 2012

vCMA - VMware vCenter Mobile Access Deployment

So today we got all of our Android apps together, opened up some ports in the firewall, and configured our phones so we can manage things from our phones while we're driving. Kidding. So we've got Paessler PRTG monitoring our servers and applications, HP's IMC monitoring our network infrastructure, Spiceworks doing helpdesk and our "IT Knowledge Base", and now VMware vCenter Mobile Access (vCMA) so that we can access vCenter remotely.

Setting up the virtual appliance, which can be found here, was fairly easy; I followed a couple of very nice sets of instructions here and here. I did get a little lost in a couple of places.

  • First, when you download the vCMA, make sure you go with the OVF download. There's a zip file download there that includes some VMDK files and a VMX file, but I couldn't get it to start. 
  • I actually had to break out my mad vi skills (of which I have scant) to edit some config files in the vCMA appliance, which is based on CentOS. I needed to change the port that vCMA was listening on from 443 to something else, so I cracked open vi, edited the /usr/lib/vmware/mobile/tomcat/apache-tomcat-6.0.28/conf/server.xml file, and did a search and replace. To do a search and replace with vi, use this: :%s/foo/bar/gc. This command replaces all instances of 'foo' with 'bar', prompting you each time. When you're done editing, :wq will write your changes to the file and quit vi, by the way. Using vi makes me thank the FSM for nano and gedit, which are included in modern and "fuller" Linux distros.
  • There is no "app" for Android to make use of this that I could find, although from some of the pictures there seems to be a nice iPad app (boooooo). You just point your mobile browser at https://vcmaserver:port and that's it. The instructions I was following were a little hazy on this. Port 5480 is the management port, but you need to go to the SSL port for the app, which is just browser based. I could complain about it being browser-based, but to be fair, I'm not going to be doing "work" from my phone. I just want the ability to reboot some server from the park if I need to, and this achieves that. 

Realistically, I've no reason to complain at all. I really like that VMware has made this capability available. This is what's called a "fling", meaning that it's unsupported and just for fun. Fun or not, that's a handy capability! Thank you, VMware!

P.S. Could you please make an app like the iPad has for Android?

Friday, September 28, 2012

Automating SQL Express Backups for Fun and Profit

Here's how to back up SQL and SQL Express databases automatically.

Phase One:

  1. Create two folders on the C Drive: C:\DB_Backup and C:\ScheduledTasks. DB_Backup is the target for the backup job, and ScheduledTasks holds the SQL script that will be triggered via a scheduled task.
  2. Install an appropriate version of SQL Management Studio Express (SSMS).
  3. In SSMS, connect to the instance by using <ServerName>\<InstanceName>. An easy way to find the instance name is to open the services console (Start, Run, services.msc, Enter) and look for your SQL Server service. The instance name will be in parentheses next to it.
  4. Now, right click on the database and choose Tasks-->Backup
  5. In the destination area, choose to back up to disk, and click the 'Add' button.
  6. Navigate to the DB_Backup folder that you created, and make up a file name with a .bak extension.
  7. Click on the 'Options' page on the left, and select 'Overwrite all existing backup sets'
  8. Place a check mark next to "Verify backup when finished'
  9. Click the down arrow next to 'Script' at the top of the window, and choose 'Script Action to New Query Window' -- This is an excellent way to learn SQL, by the way. You can do this from nearly everywhere in SSMS.
  10. Now, press the cancel button on the backup window (we're not going to back it up now)
You should now see the SQL code for the backup in a query window. Here's an example:

BACKUP DATABASE [DatabaseName] TO DISK = N'C:\DB_Backup\DatabaseName.bak' WITH NOFORMAT, INIT, NAME = N'DatabaseName-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
declare @backupSetId as int
select @backupSetId = position from msdb..backupset where database_name=N'DatabaseName' and backup_set_id=(select max(backup_set_id) from msdb..backupset where database_name=N'DatabaseName' )
if @backupSetId is null begin raiserror(N'Verify failed. Backup information for database ''DatabaseName'' not found.', 16, 1) end
RESTORE VERIFYONLY FROM DISK = N'C:\DB_Backup\DatabaseName.bak' WITH FILE = @backupSetId, NOUNLOAD, NOREWIND

Phase Two:

  1. Click File-->Save SQLQuery1.sql As... and save the sql script file to C:\ScheduledTasks
  2. You can now close SSMS
  3. Open Task Scheduler
  4. Right-click 'Task Scheduler Library' on the left, and choose 'Create Task'
  5. Name it "<Server> SQL Express Backup"
  6. Choose an appropriate user to run as
  7. Select Run whether the user is logged on or not
  8. Select Run with highest privileges
  9. Choose highest "configure for" level available
  10. Select the Triggers tab
  11. New....
  12. Set the scheduled time.
  13. Select the Actions tab
  14. New...
  15. In the program/script section, put the path to sqlcmd.exe. Mine is "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\SqlCmd.exe" (if the path includes spaces, use quotes). Yours might be different depending on your version of SQL.
  16. In the arguments field, put the following: -S <ServerName>\<InstanceName> -i "C:\ScheduledTasks\<ServerName>-<DBName>.sql"
  17. Click OK
  18. Once your actions are completed, click on the settings tab
  19. Change the "Stop the task if it runs longer than" value to 1 hour. You can change this depending on how long an actual backup takes for your database, of course.
  20. Click OK to finish creating the task
  21. Enter the credentials for the user account that is to run the task
  22. Run the scheduled task to test it, and check C:\DB_Backup for the .bak file(s)
And that's it! Setting this up on our SQL Express databases saved us a ton of money....
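If the scheduled task ever fails silently, it helps to run the exact same command by hand from a Powershell prompt, where you can actually see sqlcmd's output. The server, instance, and file names below are placeholders; substitute your own:

```powershell
# Placeholder names -- use your own server, instance, and script file.
& "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\SqlCmd.exe" `
    -S "MYSERVER\SQLEXPRESS" `
    -i "C:\ScheduledTasks\MYSERVER-MyDatabase.sql"

# Then confirm the .bak file landed where you expect:
Get-ChildItem C:\DB_Backup
```

Whatever error message sqlcmd prints here is usually a lot more useful than the task scheduler's generic exit code.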

Thursday, September 27, 2012

Heads-Up Display for Network Monitoring (Combining Multiple Websites into One)

When I started my new job, one of the first things I did was to purchase and set up Paessler's PRTG network monitoring software. PRTG is a great piece of software for the money. It's VERY worth it to have network monitoring up and running. You get notifications when something goes down and access to the monitoring software via your cellphone (Android or iPhone). On top of that, it can track historical usage data, which I find most helpful when dealing with disk space. It's nice to know a month ahead of time that you're going to run out of space on a drive and need to do something about it.

Another thing I love about having an at-a-glance display of my servers and applications is that when I'm done patching and rebooting tons of servers (I have 60), I can look at a pretty picture and know immediately whether everything's working correctly or not. We have a 50-inch TV in our hallway connected to a computer that is used to display various pretty pictures from our monitoring websites, and I had to learn a little bit (more) about HTML to make it work properly. Here's my HTML code:

<title>This is how we do it......</title>
<frameset cols="65%,35%">
  <frameset rows="65%,35%">
    <frame src="https://MyPRTGServer/alarms.html" />
    <frame src="http://MySpiceworksServer/tickets/list/open_tickets#" />
  </frameset>
  <frame src="https://MyPRTGServer/Sunburst" />
</frameset>

The result:

I put a link on the desktop, and after I open the webpage the individual components load. I then press F11 to make it full screen, removing the dozen toolbars that are somehow installed (I kid). One caveat to this is that if you are opening HTTPS websites within your frames, and those pages have a self-signed cert, you will need to open them individually first so that the browser knows that you trust them (I understand the risk!). Only after doing that will your browser allow you to open those pages within an iframe.

So, the webpage. You have two columns. The first column is set up by the outer frameset tag, and takes 65% of the screen. That real estate is then split 65/35 to show our PRTG alarms on top and our Spiceworks helpdesk tickets at the bottom. The second column shows the PRTG Sunburst. The trick with using the frameset tag this way is that you declare the columns, then declare the rows inside of the first column, then declare your next column, and so on. When I come upstairs in the morning, after I check the server room, I walk past this display and it's the first thing I see. If there's an error, it will be red; pink signifies errors that someone's already acknowledged; and if it's all green, then I go make myself some coffee!

I recommend that every sysadmin learn a little bit of HTML. You don't need to mess with CSS stylesheets and JavaScript (though they might help you more, depending on how "dev" your role is), but it's nice to be able to quickly whip up a page of links, or edit an IIS error page on the fly. One great resource I've found (and use) for HTML coding (by the by, I think programmers in general despise it when you call writing HTML 'coding') is the W3Schools website, found here. I recommend using Notepad++ for any text editing, including HTML and Powershell scripts, by the way.

Tuesday, September 25, 2012

WSUS Logs and a GREAT Performance Monitor tool I found

Some finger pointing went on at work, so I thought it might be wise to figure out where WSUS kept its logs. Nothing's going to come of it, but it just shed light on something that I didn't know about. Who knows? Someday I might need to know where they are! The complete list is here on technet, but the part that I'm most interested in is, "Who approved which WSUS updates on what date?" For that, you need to look at the file %ProgramFiles%\Update Services\Logfiles\Change.log.

I was also poring over a master list of "free" sysadmin tools. As a cultural aside, it's unfortunate that we need to put definitive words like 'free' in quotes so that people don't take them literally. If the word you're using isn't accurate, pick a different word; we have plenty to choose from! I digress. There are some real free (literally) gems in there that I use quite often (Spiceworks, Notepad++, EMCO MAC Address Scanner, and WinDirStat, to name a few of my favorites). There are also a lot of things that are decidedly NOT free, or there's a free version, but it's crippled enough to make it nigh unusable. Today while going through the list I found an app called ControlUp. It's free up to a point, but I have a relatively small environment, and even fewer servers on which I need to monitor performance. This little app has definitely earned a place on my monitoring screen.

Monday, September 24, 2012

Starting Processes or Commands Within Powershell

It's been a busy couple of days. Saturday night I spent at work doing this month's Windows patches. I'm tasked with trying to put together a patching procedure for about 70 servers that can be shared among multiple admins so that we don't step on each other's toes. Our SAN doesn't have a whole lot of throughput, so when multiple VMs are patched simultaneously it tends to slow everything down. It's almost easier to do it myself, really, but in the long run my family (and my sleep schedule) will thank me for spreading the load.

New Powershell command of the week: Start-Process
This is the command I ended up using to automate my Veeam Backup & Recovery jobs. One example for usage is:
start-process "C:\Robocopy.exe" -ArgumentList '"j:\Backup Exec\Backups "Z:\Backup Exec\Backups" /MIR /NP /LOG:C:\RobocopyBE.log'

Notice that my argument is blocked within single quotes, and the paths, which include spaces, are blocked in double quotes, as is customary with this command. The single quotes tell Powershell to pass the text between them as-is. Here is a really cool blog post about the different ways that Powershell can run commands. While start-process itself isn't addressed, usage of Invoke-Command and Invoke-Expression is. I like that there are multiple ways to accomplish things in Powershell. Sometimes I end up beating my head against the wall trying to get a command to work a certain way, and if I don't get too stubborn, I find that I can usually back up, find a different method to achieve my ends, and then move on more quickly.
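One related trick worth knowing: by default, Start-Process returns immediately and your script keeps going while the child process runs. If a script needs to wait for robocopy to finish (say, before kicking off a tape job), the -Wait and -PassThru switches help. A sketch, with placeholder paths:

```powershell
# -Wait blocks until the process exits; -PassThru returns the process object.
$proc = Start-Process -FilePath "C:\Windows\System32\Robocopy.exe" `
    -ArgumentList '"J:\Backup Exec\Backups" "Z:\Backup Exec\Backups" /MIR /NP /LOG:C:\RobocopyBE.log' `
    -Wait -PassThru

# Robocopy exit codes below 8 indicate success (with or without files copied).
if ($proc.ExitCode -ge 8) { Write-Warning "Robocopy reported a failure." }
```

Checking the exit code this way also gives you something concrete to email yourself about when a nightly copy goes sideways.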

Friday, September 21, 2012

Screen Real Estate Upgrade

So I got a new HP Elitebook 8460p laptop at work, and it's very nice. Having an SSD makes SO much of a difference. Therefore, I had to reinstall everything. What a pain. While I was waiting for the 100+ Windows Updates to install, I decided to look around the web and see if I could find any new and useful tools. I do a lot of multitasking (this laptop has 16GB of RAM and I have a docking station running dual 22" monitors), so screen real estate is quite important to me. Here are 3 really cool apps I have found:

qttabbar - This gives you tabs in Windows Explorer. Did I mention that it's free?
Office Tab - This gives you tabs in Excel, Word, and Powerpoint! There's a more expensive version that does Project and Visio, but I struggle so with Visio that I'm of a single purpose when I'm in that app. It costs 30 bucks, and I'm still in eval mode, but so far so good. I really only need the tabs for Excel.

And the prize for most amazing app of this decade (I'm really excited about this, can't you tell):
Actual Multiple Monitors

I can't say enough great things about this software. I was reading about other apps that fit this niche and everyone had one complaint or another about them. I downloaded this and was blown away. I can pin stuff to either screen, and the items on the screen only show up in that screen's taskbar. I can have separate toolbars for each screen's taskbar. It even allows my desktop to be continuous horizontally and vertically! Seriously, if I take my mouse off the left side of my second monitor, it appears on the right. How cool is that? I used it for 5 minutes (and most of that was configuration) and immediately plunked down $30 for it. It's worth its weight in gold as far as I'm concerned.

I'm also playing around with Launchy, which is a ... er.. launcher that you just type things into. This seems a lot like me just hitting the start orb and typing, but I'm checking it out. I know it's pretty popular, so it must be useful in some other capacity. The less I HAVE to use my mouse, the better.

In Powershell land, I discovered a very nice function called New-PSDriveHere that allows you to easily create PSDrives. Mad props to The Lonely Administrator.
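Under the hood, functions like that lean on the built-in New-PSDrive cmdlet. A minimal version of the idea looks like this (the drive name "Here" is arbitrary):

```powershell
# Map the current location as a drive named "Here" using the FileSystem provider.
New-PSDrive -Name Here -PSProvider FileSystem -Root (Get-Location).Path

# The new drive can then be addressed like any other:
Get-ChildItem Here:\
```

Handy when you're ten folders deep in some UNC path and tired of typing it.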

Here's a way to "Open Powershell Prompt Here" from Windows Explorer, like we used to do with DOS back in the day.

Wednesday, September 19, 2012

Powershell - Adding A Trusted Server

So I'm going through and automating my new infrastructure. I don't have the money for a log analysis tool, so I've created a tidy little script that will run on each of my servers that will gather error and warning events from the system and application logs over the past day and email them to me.

#Get yesterday's date and set some headers for the email I'll create later.

$Date = ((get-date).adddays(-1))
$SystemHeader = "`r`n+++   System Log Errors and Warnings    +++"
$ApplicationHeader = "`r`n+++ Applications Log Errors and Warnings  +++"
$DelimiterLog = "`r`n++++++++++++++++++++++++++++++++++++++++++++"
$Newline = "`r`n"

#Get the hostname
$Hostname = ($env:computername)

#Get Error/Warning events from the System log
$System = (get-eventlog system -after $Date -EntryType Error,Warning | select Entrytype, Source, EventID, Message | ft -autosize)

#Get Error/Warning events from the Application log
$Application = (get-eventlog application -after $Date -EntryType Error,Warning | select Entrytype, Source, EventID, Message | ft -autosize)

#Craft and send the email
$Body = $DelimiterLog
$Body += $SystemHeader
$body += $DelimiterLog
$body += ($System | Out-string)
$body += $Newline
$body += $Newline
$body += $DelimiterLog
$body += $ApplicationHeader
$body += $DelimiterLog
$body += ($Application | Out-string)
Send-Mailmessage -from "" -to "" -subject "Log Errors and Warnings from $Hostname" -smtpserver mailservername -body $body

I put the script on my central "management" server that performs all of my scheduled tasks. Everything ran fine when I tested the script, but most of my scheduled tasks failed. I remoted (that should be a verb in the dictionary at this point, methinks) into one of the servers that didn't run the script correctly, and lo and behold, when I ran the script from a local prompt I got a weird warning:

Security Warning: Run only scripts that you trust. While scripts from the Internet can be useful, this script can potentially harm your computer. Do you want to run \\server\scripts\my.ps1? [D] Do not run [R] Run once [S] Suspend [?] Help (default is "D"):

This won't do!
After hunting around on the web I found out that the server you run the script from needs to be a trusted site in your Internet Settings. The easiest way I found to accomplish this was pushing out the following registry key via group policy:
Key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\EscDomains\<servername>
Now, create a REG_DWORD named *, and set the data to 2 (decimal)
Because I wanted to trust our DFS infrastructure, the servername key was our root DFS namespace server.
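If you just need this on one box (or want to test before pushing the GPO), the same key and value can be created from an elevated prompt. "dfsroot" below is a stand-in for your own server name:

```powershell
# Creates the EscDomains entry and the * value in one shot (run elevated).
# "dfsroot" is a placeholder -- use your DFS namespace server's name.
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\EscDomains\dfsroot" /v "*" /t REG_DWORD /d 2 /f
```

Once it works manually, rolling it into group policy is just the scaled-up version of the same change.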

And voila! No more prompts!

Tuesday, September 11, 2012

Calling Veeam jobs via a Powershell script

So I've been playing with using Veeam via Powershell. Well, I've only tried one thing, but I have it working now. I needed a Powershell script that would run different actions on different days. This Powershell script would be called at the end of my Veeam daily backup job, which starts every day at 5:30. I'm doing reverse incremental backups, so the end time of my job can fluctuate, and I needed something that would take a different action based on the day of the week.
First, I created a scheduled task to run a simple Powershell script that wrote the day of the week to a text file, like so:

$DayOfWeek = ((get-date).dayofweek)
$DayOfWeek | out-string | out-file c:\temp\VeeamDayOfWeek.txt

This job is scheduled to run every day at 5pm. My backup job starts at 5:30. When my backup job completes, it then runs powershell.exe -File c:\PS\VeeamJobEnd.ps1.

Add-PSSnapin VeeamPSSnapIn
$DayOfWeek = (Get-Content c:\temp\VeeamDayOfWeek.txt)
If ( ($DayOfWeek -like "*Monday*") -or ($DayOfWeek -like "*Tuesday*") -or ($DayOfWeek -like "*Wednesday*") -or ($DayOfWeek -like "*Thursday*") ) {
    Start-Process "C:\Program Files\Symantec\Backup Exec\bemcmd.exe" -ArgumentList "-o1 -jBEJob-BackupToTape-Diffs"
}
If ($DayOfWeek -like "*Friday*") {
    Get-VBRJob | where {$_.Name -like "Weekly*"} | Start-VBRJob
}

It was a bit tricky to get the Veeam snap-in name. All of Veeam's documentation tells you to run the Powershell console from within the Veeam GUI. That's no good for automation!!! I then remembered Get-Module (which finds modules registered with Powershell on a system) and its counterpart Get-PSSnapin. Sure enough, the name came up in my list of snap-ins. Then I use Get-Content to read the day of the week from the text file that the earlier task wrote. If you're wondering why I don't just calculate it within this script, keep in mind that the Veeam job may finish at 11:30pm, or it may finish at 5:30am (the next day). Since there's variance in what day of the week it might be when the job finishes, I thought this was the best approach. Then we go with the If statements: if it's M-Th, I'm telling Backup Exec to start a job. If it's Friday, I use the Veeam snap-in to get my weekly backup job and start it. This was a pretty counterintuitive thing to work out; I'm speaking of this line:

Get-VBRJob | where {$_.Name -like "Weekly*"} | Start-VBRJob

The name of my Veeam backup job is "Weekly Backup", so I thought I could just do Get-VBRJob "Weekly Backup", but it didn't like that at all. So, I performed a Get-VBRJob | Get-Member to see what other properties I could use. Nothing worked, so off to Google I went, and found this method. It works!

Wednesday, September 5, 2012

PSA: Test your Directory Service Restore Mode Password

Directory Service Restore Mode is what you have to use in order to authoritatively restore things in Active Directory, like a domain controller. Or something. I've never actually used it outside of a lab until this past week, when I wanted to move where the Active Directory logs were stored. You can find that process here.

The shocking part came when I needed to boot into Directory Service Restore Mode (DSRM from now on, crikey). On a domain controller (and only a domain controller) you access DSRM by hitting F8 before Windows boots (just like going into Safe Mode) and then choosing DSRM from the list. I made it to the menu, but didn't know the DSRM password. It wasn't in our password database either. Hmmmm.... and no one else knew it. It's a damn good thing that we weren't in some DR situation where we needed it!

So, please take a minute to check your DSRM password before you REALLY need it. If you need instructions on how to reset it, then look no further.

Saturday, August 25, 2012

Working With WSUS (Windows Update Server) from the Client Side

WSUS (Windows Server Update Services) is what we admins use (usually) to push out the Windows patches every month. I've had some occasions in the past where my clients were behaving oddly, and there are some tricks I've learned over the years on how to deal with this. So, here goes:

If you are imaging machines, it's easy for the WSUS client ID to get stuck in the registry and then propagated out to your computers. You won't notice this unless you compare your real inventory to the computers listed in the WSUS admin console. When an ID is used by more than one computer, your reporting is off. What happens is that ComputerA will check in with ID#4 (the ID's a lot longer than that, but bear with me). ComputerA will get its updates and be happy. Now ComputerB will check in with ID#4. To WSUS, this looks like the computer changed its name. ComputerB will get its updates, too. The problem isn't that the computers won't update. Run that scenario again, but this time let's say ComputerA had errors and patching failed for one or many of the patches. If ComputerB checked in before you ran a patching report, you'll never know that ComputerA had issues.
To combat this, I like to delete the ID from the registry. It's one of the very last things I do when I build/image a new computer. You can safely do this while the computer's running.
reg delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate /v SusClientId /f

After the key is deleted, you should restart the "Windows Update" service. It's important to note that the name of this service is different between Windows XP and Windows 7.
In Windows XP you'd run:
net stop "automatic updates" && net start "automatic updates"
In Windows 7 you'd run:
net stop "Windows Update" && net start "Windows Update"

Or, in Powershell (note that Restart-Service wants the actual service name, wuauserv, rather than the "Windows Update" display name):
restart-service wuauserv

That's much easier, but I've been messing with WSUS since before I became Powershell savvy, and old habits die hard....

There's a command to regenerate the SUS ID we deleted from the registry (there's no Powershell equivalent), and it's a switch on THE command that Windows manages its updates with: wuauclt.exe.

This is kind of a weird little CLI program, because entering wuauclt.exe /? won't get you anything. The commands I use are as follows:
wuauclt.exe /detectnow - forces the computer to check with its WSUS server for new updates.
wuauclt.exe /resetauthorization - this is the command that re-registers the computer with WSUS, and generates the new SUS ID.
Note that you can combine the switches on a single line, like this:
wuauclt.exe /resetauthorization /detectnow
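Putting the whole Windows 7 sequence together, here's what I'd run from an elevated Powershell prompt (same commands as above, just in one place):

```powershell
# 1. Delete the (possibly cloned) SUS client ID from the registry.
reg delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate /v SusClientId /f

# 2. Restart the Windows Update service (service name is wuauserv).
Restart-Service wuauserv

# 3. Re-register with WSUS (generates a fresh ID) and check in.
wuauclt.exe /resetauthorization /detectnow
```

Run this at the end of your imaging process and every clone shows up in the WSUS console under its own ID.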

Thursday, August 23, 2012

Miscellaneous How To's So I Don't Forget!!

I'm using this blog as much for me as for my readers (which I only have a handful of, but whatever). It's another source of documentation, as far as I'm concerned. So today, I've got some neat tricks that I perform fairly often, and I'm sick of searching the web for them every time I need them.

Disk Cleanup in Windows 2008 R2

Did you know that Windows 2008 R2 doesn't come with the Disk Cleanup app installed? I have NO idea what Microsoft is thinking, but it's still on the system; you just have to copy some files and make a shortcut to use it.

  • Move cleanmgr.exe from %systemroot%\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.1.7600.16385_none_c9392808773cd7da to %systemroot%\System32
  • Move cleanmgr.exe.mui from %systemroot%\winsxs\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.1.7600.16385_en-us_b9cb6194b257cc63 to %systemroot%\System32\en-US
  • Now go back to %systemroot%\system32 and send-to --> desktop (create shortcut), or you can simply type cleanmgr.exe into the run dialog box since system32 is listed in your system path variable.
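The same two copies as a Powershell sketch. Note that the winsxs folder names contain build-specific strings; the paths below come from the steps above (RTM build 6.1.7600) and may differ on a patched system. I copy rather than move so the winsxs originals stay intact:

```powershell
$sxs = "$env:systemroot\winsxs"

# Copy (rather than move) so the originals stay in winsxs.
# Folder names below are build-specific and may differ on your server.
Copy-Item "$sxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.1.7600.16385_none_c9392808773cd7da\cleanmgr.exe" "$env:systemroot\System32"
Copy-Item "$sxs\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.1.7600.16385_en-us_b9cb6194b257cc63\cleanmgr.exe.mui" "$env:systemroot\System32\en-US"
```

After that, cleanmgr.exe works from the Run dialog just as described above.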

Bit and Byte Conversion

I found this handy website to convert bits to bytes and vice-versa

Creating Test Files of a Certain Length

Here's a handy command to create test files for testing copy speeds, for instance. Note that you have to run the command prompt as administrator if you have UAC turned on.
C:\> fsutil file createnew <filename> <filesize_inbytes>

For Example (This creates a 1GB file):
C:\> fsutil file createnew C:\Temp\Test_File_1GB.txt 1073741824
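If you need a batch of test files at once, Powershell's built-in size multipliers keep the math readable (the target folder here is just an example):

```powershell
# Create 1GB, 2GB, and 4GB test files; Powershell expands 1GB to 1073741824.
# Run from an elevated prompt if UAC is on, same as the fsutil example above.
foreach ($size in 1, 2, 4) {
    fsutil file createnew "C:\Temp\Test_File_${size}GB.txt" ($size * 1GB)
}
```

The 1KB/1MB/1GB suffixes are plain Powershell numeric literals, so they work anywhere a number does.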

Removing Old Patch Installation Files in Windows 2008 R2

Back in Windows Server 2003, one way to reclaim disk space was to delete hidden folders with the $Uninstall prefix from the C:\Windows folder. This would make it impossible to uninstall updates, so you had to be relatively sure that you wouldn't need to do that. There are ways around that: Build a 2003 Test machine, install the update that you need the uninstall files for, and copy the $Uninstall folder for that patch back to the server.

This process was altered in Windows Server 2008 and again in R2. For Windows Server 2008 R2, the command you need to run (as an administrator) is:
C:\> dism /online /cleanup-image /spsuperseded

Monday, August 20, 2012

Things get complex, but don't forget to check the easy stuff first

So my Veeam backup solution wasn't performing as I thought it should. Veeam had direct access to my SAN via a 4Gb Fibre Channel HBA, and was only pulling around 40MB/s. I was pulling my hair out and finally posted on a forum for assistance. The advice? Check the driver on the HBA card. Lo and behold, the QLogic HBA card was using a MICROSOFT driver. That's just no good. After updating the driver, my speeds went from 40MB/s to anywhere between 130-250MB/s. Now that's a speed boost!

I think it's important that we as technical people strive to maintain good troubleshooting skills. Sure, the infrastructure gets complicated, but don't ever forget that the best fix might be the easiest. Your internet isn't working? Start from the bottom and work your way up. Sure, it might be a routing issue, but put traceroute away for a minute and check that the network cable is plugged in. Keep it simple, stupid.

Saturday, August 18, 2012

Ice Cream Sandwich and my battle with backups (Backup Exec, Veeam, and SQL Oh My!!!)

I've been using Firefox for, well, a very long time - IE6 era, I think. I've been hanging on because Firefox could sync my bookmarks and passwords between all of my devices. My Samsung Galaxy S2 was on Android version 2.3.3 (Gingerbread) and I couldn't run Chrome on it. I was pretty close to rooting it, but I fix computing devices often enough, and my phone is a device that just has to work. My life is too busy to have it go down and immediately be able to sit down and fix it. Aaaanyway, I got the Ice Cream Sandwich update from Sprint this past week (which was out in, what, March?) and NOW I can run Chrome!!!! My favorite thing so far is Chrome-to-Phone, and my biggest pet peeves are the lack of a decent FTP program like FireFTP on Firefox, and the absence of any command to sort my bookmarks alphabetically. I've also installed Google Cloud Print, but haven't needed to use it yet. The extensions are pretty amazing:

  • I can post to this blog, my blog on Tumblr (The LAG), Twitter, Facebook, etc with AddThis
  • X-Notifier checks all of my email accounts on a regular basis
  • Facebook, Google Phone, and Skype all have good add-ons as well
I really like Ice Cream Sandwich, too; I'm just more excited about being able to switch the default browser on all of my computers to Chrome. My notifications are much more descriptive, and it looks like I've gotten a battery boost.

At work, I've rolled out PRTG for network monitoring. Having a good network monitoring solution in place saves the IT department A LOT of headaches, especially with software as robust and flexible as PRTG. Having the ability to see all of your problems (with some exceptions) at a glance saves a ton of time. I turn off monitoring while I run the monthly Microsoft updates, then when I'm done rebooting the servers I can flip PRTG back on and see almost immediately if anything didn't come back on correctly. The BEST thing about something like PRTG though, and the most indispensable feature, as far as I'm concerned, is the tracking of sensors over time. Now, I can look at some graphs and predict when we might need to add drive space, or if someone copies a giant 20GB file to my server I can detect it immediately and possibly head things off before they bring down the server (this has happened more than once).

Also, I'm completely redesigning the backup situation. We were using Symantec Backup Exec 2010 R3 exclusively for backups, and we weren't using it right. We needed a lot more licenses to comply with the licensing requirements for our myriad SQL servers, and the backup process just wasn't working well. Backup Exec is.... difficult to use. For me, anyway. Maybe with some training.... oh, never mind. So I sold the boss-man on plunking down about $14,000 for the Veeam Management Studio, which includes Veeam ONE monitoring for our VMware infrastructure, which plays into the previous topic. So I've finally got Veeam Backup and Replication 6.1 in place and it's doing its first backup as we speak. Well, technically it's the second, but the first backup was so slow I had to scrap it. You see, Veeam B&R can talk directly to the SAN over a Fibre Channel HBA, instead of trying to move 5 terabytes (in my case) over a gigabit network link. I misconfigured it the first time and was in network mode. Veeam also does reverse incremental backups and change-block tracking (it only backs up blocks that have changed), and is really good at deduplication (REALLY good). I'm hoping to cut my storage needs in half, and shrink my backup window big time. For example, my current backup has now read 3.4 TB, and only written 1.9 TB to disk. A little over half. Monday's backup will only back up what's changed from this one, so it should run pretty fast. I'll post some more stats once I get the process going and smoothed out. When Veeam's done, it will automagically launch my Backup Exec job, which will write the Veeam backup to tape. So, here's how to launch a Backup Exec job from Veeam: edit the job, select Storage, click the Advanced button, and click on the Advanced tab. The bottom section allows you to run commands at the end of a backup job. The command to launch a Backup Exec job looks like this: "C:\Program Files\Symantec\Backup Exec\bemcmd.exe" -o1 -jJobName. Use the quotes in there, by the way.
Here's a handy web page that outlined the rest of the details.
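If you want to sanity-check that post-job command before trusting it to Veeam, a quick Powershell wrapper works. This is just a sketch: the job name is a placeholder, and I'm assuming bemcmd.exe returns a nonzero exit code on failure, so verify that against your own install.

```powershell
# Launch a Backup Exec job the same way Veeam's post-job command does.
# "Nightly to Tape" is a placeholder job name; adjust the path for your install.
$bemcmd = 'C:\Program Files\Symantec\Backup Exec\bemcmd.exe'
& $bemcmd -o1 '-jNightly to Tape'

# $LASTEXITCODE holds the exit code of the last external program;
# 0 meaning success is an assumption here.
if ($LASTEXITCODE -ne 0) {
    Write-Warning "bemcmd.exe exited with code $LASTEXITCODE - check the Backup Exec console."
}
```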

While I was auditing what was being backed up, I kept finding little SQL Express instances everywhere. Symantec requires a special agent to properly back these up. Instead of plunking down around $9,000 more for the appropriate licenses to cover roughly 15 more SQL Express servers with Backup Exec, I'm skipping the SQL instances in my backup selections and will just run a SQL script as a scheduled task to back up each database to a .bak file before the regular file backups run. Here's the page that gave me the instructions on putting this together. It's talking about backing up your VMware vCenter database (if you store it in SQL Express), but the same technique applies to other SQL Express instances.
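The scheduled-task version of that backup boils down to a one-liner around sqlcmd. A sketch, assuming a default SQL Express instance name and made-up database/paths (adjust all three for your environment):

```powershell
# Back up a SQL Express database to a .bak file before the file-level backup runs.
# ".\SQLEXPRESS", "VIM_VCDB", and the backup path are placeholders.
$query = "BACKUP DATABASE [VIM_VCDB] TO DISK = N'D:\SQLBackups\VIM_VCDB.bak' WITH INIT"

# -E uses Windows authentication, so the scheduled task's account needs backup rights.
sqlcmd -S ".\SQLEXPRESS" -E -Q $query
```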

Wednesday, August 8, 2012

Working with Scheduled Tasks

So I've been getting the IT "general purpose" server up and running. I have all of my management tools installed, and now it's time to automate some reporting and maintenance functions. Right now, I have 3 scheduled tasks:
  • Windows Service Monitor: This Powershell script looks at many of my servers and the services that I deem "important". It emails me if it finds any that aren't running. It's a terrible script; I pretty much did a Get-WMIObject query, ran some logic with a where-object statement and then dumped it out to a file. The beast is 900 lines long or something ridiculous (done through copying and pasting, not by hand). I KNOW that there's a more elegant way to do it, but I haven't got the time to spend on it. It does what it needs to do.
  • Website Monitor: This script downloads the html code from a couple of web pages, then runs select-string to return a true or false based on whether certain text appears in the code. If the expected code isn't there, the script emails the admins, because something's not right.
  • The "Start Patching" script: We run this script before we start patching to turn off our monitoring software. First, it disables our SNMP monitoring software, then it stops the Spiceworks service to take Spiceworks offline. We don't want a bunch of emails that so-and-so system is down. The last thing it does is to disable the above two scheduled tasks, and let me tell you that this took me a bit to figure out. Yeah, you use powershell, but you use it to call a CLI program called schtasks.exe, which I think is pretty weird for Windows Server 2008 R2. You'd think they'd have some powershell commands for the Task Scheduler, but no. So here's the syntax for disabling a scheduled task:
$CommandDisable = 'schtasks.exe /Change /S ComputerName /TN "Windows Service Monitor" /Disable'
Invoke-Expression $CommandDisable

So first, I put the command into a string variable. Using single quotes forces Powershell to treat the double quotation marks literally (as part of the string itself). Then I can use Invoke-Expression to run the string as a command (note that it's Invoke-Expression, not Invoke-Command; Invoke-Command expects a script block, not a string). Schtasks.exe has a ton of syntax options. Way back when, I remember using it to set up scheduled tasks from scratch in a batch file; it's got really great functionality (too bad it hasn't been ported to powershell, grumble grumble). Every Windows computer has schtasks.exe, so if you're curious, pop into a cmd shell and type schtasks /?. The help is layered, so you can type schtasks /Change /? and get help specific to that subset of the command structure. Fun stuff.
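For what it's worth, the 900-line service monitor described in the first bullet above could probably collapse into a loop along these lines. This is only a sketch: the server list, service names, and mail settings are all placeholders.

```powershell
# Check a list of "important" services across several servers and email
# about any that aren't running. Names below are placeholders.
$servers  = 'SERVER01', 'SERVER02'
$services = 'MSSQLSERVER', 'W3SVC'

$down = foreach ($server in $servers) {
    Get-WmiObject Win32_Service -ComputerName $server |
        Where-Object { $services -contains $_.Name -and $_.State -ne 'Running' } |
        Select-Object @{n='Server';e={$server}}, Name, State
}

if ($down) {
    $body = $down | Format-Table -AutoSize | Out-String
    # Send-MailMessage ships with Powershell v2+; the SMTP details are made up.
    Send-MailMessage -SmtpServer 'mail.example.com' -From 'monitor@example.com' `
        -To 'admins@example.com' -Subject 'Services down!' -Body $body
}
```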

Tuesday, August 7, 2012

Just some links

While I've learned a lot, I really haven't had much time to actually write about any of it. I'm in the process of rolling out Spiceworks as a helpdesk and inventory solution. It's kind of clunky to work with, honestly. My employer wants layers of helpdesk categories, and that functionality isn't built-in, so I have to rely on volunteer developers for a solution. I LOVE that people like to code, and I love free stuff (who doesn't) but in a year when I have to upgrade this those people might have moved on, and the project abandoned. I just don't have the time to get intimate with a solution right now, so I'm taking the position that if we want all the features of a for-pay helpdesk system, then we pay for it. If you want something nice and flat, then sure, we can do Spiceworks and save a few grand.

I'm also working on learning about our SAN, and our vendor (Mass Mountain) gave me a flash drive with their software on it (it's a 60 day trial once you create a volume group). You just boot from the flash drive and run it on whatever hardware you have lying around, which is pretty cool. I wanted to run it in VMware Workstation 8, where I have a VMware lab - 3 ESXi hosts in a cluster with High Availability and Distributed Resource Scheduling configured, a domain controller, a vCenter server, a FreeNAS iSCSI server, an OpenFiler NFS server, and a Server running Veeam that's backing up 3 servers running ON the ESXi cluster. Unfortunately, I had some problems getting VMware Workstation to boot from the USB drive. I found this workaround using a boot manager ISO called Plop, which did the trick.

Wednesday, July 25, 2012

So busy with the new job!

So I've been at the new job almost a month now. I'm a "Technical Analyst" now as opposed to a "Network Administrator", and before I took the job I was concerned with how my resume would look in a few years' time (petty, I know), but I really couldn't care less at this point; I'm glad I took the plunge. I'm in a much bigger environment, which is intimidating, but I'm not really in charge of the network, so that's ok. I'm in charge of a few pretty neat projects right now:

  • Implementing a patching procedure - these guys have been short-staffed for a while, and we need to get caught up, the work needs to be spread evenly, and dependencies need to be better identified.
  • Learning all about our SAN. I haven't ever really worked with a SAN, so this is kind of a biggie.
  • Reworking the backups. They're failing too often and taking too long. I hate Backup Exec..... I'm going to implement Veeam for the virtual servers. Love Veeam!!! Long term, that'll enable us to easily replicate production servers into a lab environment, which is going to be necessary because.....
  • Moving the domain controllers to Windows Server 2008 R2 from 2003. Most of my mad AD Powershell skillz are languishing because our DCs are so old and I can't talk to AD with Powershell. I know, I know, I could install some things and make it work, but my time is stretched so thin that if I'm going to spend any amount of time on it then I might as well kill two birds with one stone.
  • That said, I did make a very nice Powershell script the other day that backs up a SQL database, moves it to another SQL server, restores it to the SQL instance there, and then runs a database consistency check on it (emailing the results, of course). I might write out what I did in a subsequent blog post, but the script's a doozy. I'm also working on consolidating our SQL servers. They've bred like rabbits here - every app needs its own SQL box. Not on my watch!
  • I'm in charge of our VMware infrastructure now. The first thing I did was to run vCheck, an automated Powershell script (a ton of them, actually) that I picked up here to check for common misconfigurations and looming problems. Then, I got a 30-day trial of Veeam ONE (Veeam's monitoring and alerting software) to get a handle on performance and other issues. I'm finally getting a handle on the enormity of the stuff I need to clean up. Here's a tip: when you create a new VM, don't give it a ton of CPUs. I have a server with 8 vCPUs that isn't doing jack, except slowing everything else down. From what I've read, if your VM has X vCPUs assigned to it, it needs to wait until that many physical CPUs are free before it can get some CPU time.
  • I'm working on moving from our current helpdesk software, TrackIT, to Spiceworks, which just RTM'd version 6.0. We want to use Spiceworks for inventory tracking, helpdesk, and monitoring (although I admit to having my doubts on that last use). I haven't delved too deeply into this project yet.
So, I'll post again soon, and maybe I'll do a write up of that SQL script I mentioned above....
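As a quick way to find over-provisioned VMs like that 8-vCPU one, a PowerCLI one-liner does the trick. A sketch: it assumes you're already connected with Connect-VIServer, and the 4-vCPU threshold is arbitrary.

```powershell
# List VMs with more than 4 vCPUs, biggest offenders first.
# Requires VMware PowerCLI and an existing Connect-VIServer session.
Get-VM |
    Where-Object { $_.NumCpu -gt 4 } |
    Sort-Object NumCpu -Descending |
    Select-Object Name, NumCpu, MemoryMB
```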

Wednesday, July 11, 2012

Trouble Syncing Samsung Galaxy S2 with Exchange ActiveSync

I had no end of trouble syncing my phone (a Samsung Galaxy S2 from Sprint) with my new employer's Exchange ActiveSync. I would put in all of the info, and it would seem to take it, but my email never showed up in the email client. After much unsuccessful googling, I found an online tool that Microsoft offers called the Microsoft Remote Connectivity Analyzer. You tell it what you want to test, plug in your credentials, and it tests the path all of the way in to your actual mailbox and tells you where the connection breaks down. In my case, it even linked to a handy Technet article that told me step-by-step how to remedy the problem. The problem was one little checkbox in Active Directory on my user account. Apparently, the trouble takes root when a user account is a member of a protected group, such as "Domain Admins". This causes some ACLs to be set up differently and makes trouble for ActiveSync. You can find the Technet article here.
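If you're wondering which of your accounts are affected by that protected-group behavior, accounts that have been through it have adminCount set to 1 in AD. Here's a sketch using the plain ADSI searcher, so it works without the AD Powershell module (which matters on older domains like mine):

```powershell
# Find user accounts with adminCount=1, i.e. accounts the AdminSDHolder
# process has touched because they are (or once were) in a protected group.
$searcher = [adsisearcher]'(&(objectCategory=person)(objectClass=user)(adminCount=1))'
$searcher.FindAll() | ForEach-Object {
    $_.Properties['samaccountname'][0]
}
```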

Tuesday, July 10, 2012

Mounting VMDK Files as a Drive Letter

Wow, no posts in 14 days. I started this blog hoping to post every day, but it's just not going to happen with the new job and everything. So one of our application servers blew up, due to running out of space while doing updates. I needed some files off of the server, because the backups are all screwed up (well, they work, but they aren't friendly to access at all, and I had all kinds of problems getting the .NET Framework in place to, in turn, get the Backup Exec agent installed so that I could fail multiple times to run restore jobs). Today I learned that you can take a vmdk file and mount it under Windows! I installed the VMware Virtual Disk Development Kit, and then used vmware-mount.exe to mount the disk and explore it as a normal file system. Worked like a charm!
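The mount itself is basically a one-liner. A sketch with made-up paths; vmware-mount.exe lives in the Virtual Disk Development Kit's bin folder, and you should check vmware-mount /? for the exact options on your version:

```powershell
# Mount a VMDK's first volume as drive X:, copy files out, then unmount.
# All paths and the drive letter are placeholders.
$mount = 'C:\Program Files (x86)\VMware\VMware Virtual Disk Development Kit\bin\vmware-mount.exe'

& $mount X: 'D:\Recovered\appserver.vmdk'     # map the volume to X:
Copy-Item -Recurse 'X:\inetpub\files' 'D:\Recovered\files'
& $mount X: /d                                # /d deletes the mapping when done
```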

Tuesday, June 26, 2012

Firefox Mobile for Android has been updated!

Sorry for not posting much lately (to the 3 of you looking at this site on a regular basis). I promise I'll have more to post in the coming weeks and months, as I start my new job in a much larger and more diverse environment than I'm used to.

Today, though, I will tell you that the newest version of Firefox Mobile has hit the Google Play Store. So far I like it. It's faster and more well-organized than the last version.

Thursday, June 21, 2012

Powershell 3 session from TechEd 2012

Powershell 3 will ship with Windows 8 and Windows Server 2012 later this year. While the talk starts out like it's for noobs, the pace picks up as they show you some really fantastic things that you can do with version 3 of Powershell. You can find the talk here. There's also an article from Technet which outlines some of my favorite new features here.

Some of the things that I like best:
  • The ISE now offers IntelliSense and a native command-list sidebar
  • There's a new GUI to help you write commands (checkboxes and things for setting options and parameters)
  • Help files that you can update
Admins will be pushed more and more toward installing servers in 'Core' mode with Windows Server 2012, and Powershell will become increasingly important. There are several hundred more commands in Powershell 3 than in version 2, so get started on it!!!
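Two of those features are easy to try the moment you're on Powershell 3 (both cmdlets exist in v3; Update-Help needs internet access and an elevated prompt):

```powershell
# Download the newest help files instead of living with stale built-in help.
Update-Help

# Pop up the new command-building GUI (checkboxes and fields for parameters)
# for a cmdlet, then run or copy the resulting command line.
Show-Command Get-EventLog
```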

Wednesday, June 20, 2012

Windows 7 imaging method

Windows 7 imaging method

We are using Acronis to image our mix of desktop and tablet computers, and I was charged with coming up with the methodology on imaging computers when we started rolling out Windows 7. Learning how to work with Windows' deployment kits was just stupid. The documentation is terrible and there are a ton of different programs internal to the AIK that you have to magically just know how to use. With my complaining out of the way, let me tell you how I accomplished getting Windows 7 out to my people.
First off, a little about my environment, which directly impacted my options and enabled me to do what I did. I only need to create images for 2 different desktop and 2 different tablet models. If I had to keep track of more models, I probably would have been forced to do it "Microsoft's way". We're upgrading to Windows 7 a little at a time, so I don't need to worry about using multicast to upgrade a lot of computers at once. Now, on to the process of creating a master image:

  • Install Windows 7 Professional from the DVD, creating a generic "user" account along the way, and giving the computer a name like HP2730MSTR (combining the hardware model with "Master")
  • Enable the local admin account and set its password
  • Activate Windows 7 with your key
  • Log off of "user" and log in as the local administrator account you just enabled
  • Install Drivers, Windows 7 SP1, and Windows updates
  • Uninstall some Windows Features (Internet Printing, Windows DVD Maker, Games, Windows Media Center, Windows Fax and Scan)
  • Install Office 2010, Office 2010 SP1, and Silverlight
  • Change the workgroup: if the domain name were "foo.com", I would change the computer to belong to the workgroup "foo". This allows me to access the file servers on my network.
  • Turn off UAC. Our users are all local administrators (yeah yeah, fact of life here) and this is just annoying to them.
  • Delete the profile and user account of "user"
  • Go into Windows Explorer and turn on "Show hidden files, folders, and drives" and also uncheck the box next to "Hide protected operating system files"
  • Modifications to the Default User registry hive. Break time. The Default User hive stores the registry template for every new user who logs on to the computer. Make a change here, and that setting will propagate to every user who logs on to the computer, provided they have never logged on before (which is why we're addressing it here in the image). To accomplish this:
    • Open Regedit.exe, and load the default user hive. This is accomplished by:
    • Highlight the HKEY_USERS key
    • Click File, then Load Hive
    • Choose C:\Users\Default\NTUSER.DAT (If you can't see the Default folder, then you didn't perform the Windows Explorer step above)
    • Give it a name, it doesn't matter what
    •  Now, expand HKEY_USERS and the folder you just named
    •  The subkey you want is Software\Microsoft\Windows\CurrentVersion\RunOnce
    •  On the right pane, create two new string values, named "Libraries" and "RemPinned"
    • Modify the Libraries item so that the data= c:\libraries.bat
    • Modify the RemPinned item so that the data= c:\rempinned.vbs
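If you'd rather script those RunOnce entries than click through regedit on every master image, the same steps work with reg.exe. A sketch; "DefaultUser" is just a temporary hive name I made up:

```powershell
# Load the Default User hive, add the two RunOnce entries, and unload it.
# Run from an elevated prompt on the master image.
reg load HKU\DefaultUser C:\Users\Default\NTUSER.DAT
reg add "HKU\DefaultUser\Software\Microsoft\Windows\CurrentVersion\RunOnce" /v Libraries /t REG_SZ /d "c:\libraries.bat" /f
reg add "HKU\DefaultUser\Software\Microsoft\Windows\CurrentVersion\RunOnce" /v RemPinned /t REG_SZ /d "c:\rempinned.vbs" /f
reg unload HKU\DefaultUser
```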
So, now you're asking yourself, "Where are those files?" Well, you need to create them, and copy in this text.

Filename: RemPinned.vbs
Purpose: This VBS file unpins the default items that Windows 7 pins to the taskbar (Media Player, IE, and Explorer)
Created by: Charles Stemaly (shamelessly copied and pasted from code found somewhere; I'm not very good at VB Scripting)

Option Explicit

' CSIDL folder constants for the Shell.NameSpace calls below
Const CSIDL_STARTMENU = &H0B
Const CSIDL_COMMON_PROGRAMS = &H17

Dim objShell, objFSO
Dim objCurrentUserStartFolder
Dim strCurrentUserStartFolderPath
Dim objAllUsersProgramsFolder
Dim strAllUsersProgramsPath
Dim objFolder
Dim objFolderItem
Dim colVerbs
Dim objVerb

Set objShell = CreateObject("Shell.Application")
Set objFSO = CreateObject("Scripting.FileSystemObject")

Set objCurrentUserStartFolder = objShell.NameSpace (CSIDL_STARTMENU)
strCurrentUserStartFolderPath = objCurrentUserStartFolder.Self.Path

Set objAllUsersProgramsFolder = objShell.NameSpace(CSIDL_COMMON_PROGRAMS)
strAllUsersProgramsPath = objAllUsersProgramsFolder.Self.Path

'''''''''''''''''''''''''''''''''''''''Unpin Shortcuts'''''''''''''''''''''''''''''''''''''''
'Internet Explorer
If objFSO.FileExists(strCurrentUserStartFolderPath & "\Programs\Internet Explorer.lnk") Then
    Set objFolder = objShell.Namespace(strCurrentUserStartFolderPath & "\Programs")
    Set objFolderItem = objFolder.ParseName("Internet Explorer.lnk")
    Set colVerbs = objFolderItem.Verbs
    For Each objVerb in colVerbs
        If Replace(objVerb.Name, "&", "") = "Unpin from Taskbar" Then objVerb.DoIt
    Next
End If

'Windows Explorer
If objFSO.FileExists(strCurrentUserStartFolderPath & "\Programs\Accessories\Windows Explorer.lnk") Then
    Set objFolder = objShell.Namespace(strCurrentUserStartFolderPath & "\Programs\Accessories")
    Set objFolderItem = objFolder.ParseName("Windows Explorer.lnk")
    Set colVerbs = objFolderItem.Verbs
    For Each objVerb in colVerbs
        If Replace(objVerb.Name, "&", "") = "Unpin from Taskbar" Then objVerb.DoIt
    Next
End If

'Windows Media Player
If objFSO.FileExists(strAllUsersProgramsPath & "\Windows Media Player.lnk") Then
    Set objFolder = objShell.Namespace(strAllUsersProgramsPath)
    Set objFolderItem = objFolder.ParseName("Windows Media Player.lnk")
    Set colVerbs = objFolderItem.Verbs
    For Each objVerb in colVerbs
        If Replace(objVerb.Name, "&", "") = "Unpin from Taskbar" Then objVerb.DoIt
    Next
End If

Filename: Libraries.bat
Purpose: This file leverages shlib.exe to manipulate the Windows 7 libraries available to your users. I remove the local Document library mapping to a local "Public" folder, then I remove the Music, Pictures, and Videos libraries. Group Policy doesn't have very good methods to manage libraries, so I had to go this route.
Created by: Charles Stemaly
Other requirements: This batch file requires that a file named ShLib.exe be present in your C:\Windows\System32 folder. ShLib.exe can be found here, via the Grim Admin, and his methodology is here if you want to learn about this the way I did initially.

shlib remove "%userprofile%\appdata\roaming\microsoft\windows\libraries\documents.library-ms" "c:\users\public\documents"
del "%userprofile%\appdata\roaming\microsoft\windows\libraries\music.library-ms"
del "%userprofile%\appdata\roaming\microsoft\windows\libraries\pictures.library-ms"
del "%userprofile%\appdata\roaming\microsoft\windows\libraries\videos.library-ms"

Now, back to the list:

  • Still in regedit, unload the Default User hive by clicking on the folder which you named, clicking the File menu, and then choosing Unload Hive.
  • You should turn off the "Network" tree in Windows Explorer (it normally appears in the left pane, below "Computer"). This normally allows people to browse computers on your network, and no sir, I don't like it.
    • Still in regedit, expand the following: HKEY_CLASSES_ROOT\CLSID\{F02C1A0D-BE21-4350-88B0-7367FC96EF3C}\
    • Right-click on ShellFolder and alter the permissions to give Administrators Full Control
    • Modify the "Attributes" DWORD value on the right and change its value to b0940064 (I only ever had to alter 1 character, the 9)
  • Now, you can close regedit
  • Ensure that RemPinned.vbs and Libraries.bat are in your C:\ (the root folder)
  • Copy ShLib.exe to C:\Windows\System32
  • Run C:\rempinned.vbs and C:\Libraries.bat to perform their functions for the local administrator profile that you're currently logged in as.
  • Install (and update) all of the applications that every computer of that model needs: Flash, Adobe Reader, Java, etc.
  • Configure any wireless settings
  • Run Windows Update and restart as needed
  • Turn hidden files back off from earlier
  • Ensure that everything is ready to go and the system boots cleanly after any updates you performed.
  • Power the system off
  • Boot the system to your disk imaging software and capture the image as "SysPrep0 - Master"
  • Now, boot the system back into windows, and log in as the local administrator
  • Launch C:\Windows\System32\Sysprep\Sysprep.exe
  • Choose "Enter System Out-of-Box Experience (OOBE)", check the Generalize box, and choose "Shut Down"
  • Press OK and let it work (it will take a few minutes)
  • Now, boot the system to your disk imaging software again and capture this image as "SysPrep1 - Deploy"
  • After this image is captured, reimage the computer using the Sysprep0 image you created earlier, thus reverting it to before you ran sysprep. Label this computer as a master and set it aside.
Every so often, boot up the master computer, run Windows Update, install any software updates or new programs, then run through the capture steps again to grab a new SysPrep0 and then a SysPrep1 image. Doing so ensures that you don't run up against Microsoft's three-sysprep rearm limit.

I'm sure there are better ways to do a lot of this. I know that you can create an unattended XML answer file and probably get rid of a lot of the workarounds that my method uses. Unfortunately, I have yet to see a really thorough resource for what my options are and all of the syntax for the unattend.xml file.

I tried Microsoft, I really did, but you need to clean up your rollout methodologies, or better document them, or something.