Friday, May 31, 2013
Export Device Drivers from a Working System
A while ago I had to rebuild an old server on new hardware, and I couldn't figure out what model the fiber card that connected to our tape library was. I still don't know, but Double Driver helped me out big time: I was able to export the driver from the working system and use it on the new one. It worked perfectly!
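For what it's worth, newer versions of Windows (8.1 / Server 2012 R2 and later, so no help for an old box like this one) can do a similar bulk export natively via PowerShell. A minimal sketch, with a placeholder destination folder:

# Export all third-party driver packages from the running system.
Export-WindowsDriver -Online -Destination "C:\DriverBackup"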
Wednesday, May 29, 2013
Understand the Script Before you Run it
Seasoned IT people have heard this a million times over: understand the script before you run it. This is a tale of woe and unforeseen overtime that could have been avoided, were it not for the mistakes of two intrepid IT pros. It's one of the best reasons to learn PowerShell, in my opinion. There are TONS of useful scripts out there to automate just about everything, and knowing just a bit can help you step through a script and understand what it's doing before you unleash it on, say, Active Directory.
We are in the process of breaking up a gigantic file server (2 TB) into 3 chunks. Having a file server this big is an albatross. According to my math, restoring this puppy from backup would take around 36 hours. Longer term, my plan is to pair the splitting with some sensible file storage policies and some kind of archiving for static files. Together, these should get the three musketeers down to a more manageable size.
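(To put numbers on that: assuming a sustained restore rate somewhere around 16 MB/s, 2 TB works out to roughly 36 hours.)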
My cohort has volunteered to do the after-hours work to move the file shares and reconfigure DFS. Being the helpful lad that I am, I gave him a command to make his life easier:
robocopy.exe <source> <target> /COPYALL /MIR
I am infatuated with robocopy. It's such a great little program. /COPYALL ensures that NTFS permissions and timestamps are preserved. /MIR is the key part of this, though; it ensures that the destination folder becomes an exact copy of the source. BUT /MIR is a double-edged sword, and will delete files from the destination to achieve this end. I gave my cohort the command without explaining it. I really regret that, and it's made clear that I need to do my part to ensure that people understand the tools I'm giving them; this includes better documenting my code. I'm not horrible about it, but I could do better. There's always that line in the IT world where you have to assume that someone knows something, though, and it's tough to see where that line is sometimes. Telling him how to open the command prompt might seem condescending, right? Where do you start with someone? Misjudging that line is very easy to do, and can be very harmful.
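In hindsight, a safer habit to pass along with the command: preview a mirror first with robocopy's list-only switch. /L shows what would be copied and, crucially, what would be deleted, without actually touching anything:

robocopy.exe <source> <target> /COPYALL /MIR /L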
But, I digress. So my partner runs the command and moves some stuff one night. Last week, he discovered new stuff in the old "source" folder, so he ran the command again. See the problem? The /MIR switch creates a mirror of the source, and about 50GB of files were no longer present in the source, so they were deleted from the destination. Ruh-Roh. I was just heading up to bed when my phone went off. He needed a file restore. A 54GB file restore. Of many small files. Not good. I fired up my trusty Veeam Backup & Replication and started restoring files. Wow, was this thing moving slowly! I was getting throughput of 40KB/sec! A support call fixed that, but I want to tell you about some other really great things that I learned:
- Veeam Enterprise paid for itself during this process. I was able to boot the VM as it was before the mishap, output a recursive directory listing (Get-ChildItem) to a text file, and copy that file to my hard drive. Then I did the same thing on the production side and used a program called Beyond Compare to diff the two text files to see where my file restore had gone wrong (see the sketch after this list). This is the second time I've had to do something like this, and the hours of labor saved have more than paid for the higher-end version.
- Veeam doesn't like files whose full path and name exceed 260 characters. This is actually the Win32 MAX_PATH limit rather than an NTFS one; NTFS itself allows paths tens of thousands of characters long, which is how these files can exist on disk at all while still stopping a Veeam restore IN ITS TRACKS. Comparing the filesystems of yesterday vs. today helped me see what had been restored and what I had yet to do.
- During my support call, it was imparted to me that the Windows File Level Restore is not a good way to restore a lot of files at once (like 54GB worth of Word and Excel docs, for instance). Veeam takes a few seconds to verify each and every file, which is part of what was slowing me down. The tech showed me that after you mount the backup for the Windows FLR (so you're looking at the backup browser window), you should open regular old Windows Explorer and navigate to C:\VeeamFLR. Your drives will be mounted there, and you can use Explorer to copy and paste much more quickly.
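Here's a minimal sketch of the listing trick, with made-up paths; run it once inside the booted backup VM and once on production, then point Beyond Compare (or any diff tool) at the two files:

# Dump a sorted recursive file listing to a text file.
# The share root and output path are just examples.
Get-ChildItem -Path "D:\Shares" -Recurse |
    Select-Object -ExpandProperty FullName |
    Sort-Object |
    Out-File "C:\Temp\file_listing.txt"

And when copying out of C:\VeeamFLR, robocopy works just as well as Explorer and preserves NTFS permissions; the mount paths below are placeholders, and note the deliberate absence of /MIR:

robocopy.exe C:\VeeamFLR\<vm>\<volume>\<path> <target> /E /COPYALL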
So, lessons learned:
- Communicate more better
- Assume less
- Veeam Enterprise is gold, baby! (Beyond Compare is well worth the price as well)
- I need to find a way to comb my servers for really long paths+filenames (a starting point is sketched after this list)
- Use the C:\VeeamFLR folder to copy from backups back to production; it's just easier.
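As for combing for long paths, here's a rough PowerShell sketch; the share root and output path are placeholders, and a threshold a bit under 260 gives some warning room. Ironically, Get-ChildItem can itself choke on the worst offenders, so any errors it throws along the way are worth a look too:

# Flag files whose full path is approaching the 260-character MAX_PATH limit.
Get-ChildItem -Path "D:\Shares" -Recurse -ErrorAction SilentlyContinue |
    Where-Object { $_.FullName.Length -gt 250 } |
    Select-Object -ExpandProperty FullName |
    Out-File "C:\Temp\long_paths.txt"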
Friday, May 17, 2013
Chaining Together Veeam SureBackup Verification Jobs
So my most recent conundrum in implementing Veeam Backup and Replication 6.5 Enterprise was getting SureBackup up and running. I wanted to set aside each Sunday to check all of my backup jobs.
Veeam has wisely allowed backup jobs to be chained; I set my first backup job to start at a certain time, and set the next job to start when the first one finishes, and so forth. It's such an elegant way to do things, and I commend Veeam for implementing it. What I don't understand is why they didn't make this feature accessible for any job that could be scheduled. I don't do replications, so I'm not sure if you can do it there, but I know for a fact that you cannot do it with SureBackup jobs.
Therefore, I needed to dust off my chops and head back to starting and running jobs via PowerShell. I got my script syntax and methodology from a great post in this thread on the Veeam forums by v.Eremin.
Add-PSSnapin VeeamPSSnapin
$Job1 = Get-VSBJob -Name "SB_Daily_1"
$Job2 = Get-VSBJob -Name "SB_Daily_2"
$Job3 = Get-VSBJob -Name "SB_Daily_3"

# Start the first job, then put the script to sleep for 5 minutes.
Start-VSBJob $Job1
Start-Sleep -s 300

# Check the status of the last job every 5 minutes, and start the next one once it's done.
If ($Job1.GetLastState() -ne "Working") { Start-VSBJob $Job2 }
Else
{
    do
    {
        Start-Sleep -s 300
        $status = $Job1.GetLastState()
    } while ($status -eq "Working")
    Start-VSBJob $Job2
}

# Same again for the third job.
If ($Job2.GetLastState() -ne "Working") { Start-VSBJob $Job3 }
Else
{
    do
    {
        Start-Sleep -s 300
        $status = $Job2.GetLastState()
    } while ($status -eq "Working")
    Start-VSBJob $Job3
}
So, first you need to load the Veeam PowerShell Snap-In, and then you declare your job names as variables for later use. Then, you start the first SureBackup job.
Within each subsequent section, the script checks whether the previous job is still working. If not, it starts the next job immediately. If it is, the script enters a do-while loop: it sleeps for 5 minutes, then checks the job's status again, repeating until the job is no longer working. Once the job finishes, the script starts the next one.
This loop is repeated for each subsequent job. (If you have more than a few jobs, the copy-and-paste gets ugly; a loop-based version is sketched below.)
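Since each section is identical apart from the job name, the whole thing collapses into a single loop. A minimal sketch, assuming the same job names and 5-minute polling interval as above:

Add-PSSnapin VeeamPSSnapin

# Jobs are started in the order listed; each waits for the previous one to finish.
$jobNames = "SB_Daily_1", "SB_Daily_2", "SB_Daily_3"
foreach ($name in $jobNames)
{
    $job = Get-VSBJob -Name $name
    Start-VSBJob $job
    # Poll every 5 minutes until the job is no longer working.
    do
    {
        Start-Sleep -s 300
    } while ($job.GetLastState() -eq "Working")
}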
Monday, May 13, 2013
Create an Outlook Rule to Act on Emails Received during a Certain TIME Period
I ran into this conundrum while trying to suppress active CPU alarms during our weekly antivirus scans. I don't want to discard CPU activity alarms altogether, and Outlook only has a canned rule for acting on emails received on certain days.
What you need to do is create the rule using the "with specific words in the message header" condition, so the criteria looks at certain text IN THE EMAIL HEADER!
As far as the specific text to look for, I opted to use "2013 23:" (header timestamps use a 24-hour clock). I left the year in there so that the rule wouldn't falsely trigger on other instances of "23:", which is a pretty generic search term on its own.
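For example, a typical Received line in the header looks something like this (server names and addresses invented), and the rule matches on the "2013 23:" portion of the date stamp:

Received: from mail.example.com ([10.0.0.25]) by exchange.example.local; Fri, 17 May 2013 23:02:11 -0500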
Sure, I'll have to edit the rule next year, but this will get me by.