This week, my team finally had our fill of Spiceworks' Knowledge Base. Spiceworks is a GREAT free network scanning and inventory tool, to be sure. When we evaluated it, it did everything we needed (and more). The one thing we were unsure of was the implementation of its Knowledge Base.
In Spiceworks, the Knowledge Base allows you to write articles, how-to's, etc. You can keep them private to your own login, share them with your team, or share them with the entire Spiceworks community (which is a fantastic community - I get lots of help there). The problem is that your documents are not stored on your server. They're in Spiceworks' cloud.
The cloud should not be used to store important information. I just don't understand why all of these businesses are moving important things into it. The thing is, the cloud is only as reliable as your internet connection. So, who do you trust more to keep things running: your IT admin(s), or your internet provider AND the service provider? AT&T doesn't give a damn if your business is without internet.
Anyway, we tried to do it the new way. The cloud way. We put a couple hundred documents into Spiceworks over the course of a few months. Of course, there were several times when our team was not able to access the Knowledge Base because something was wrong on Spiceworks' end.
Finally, we had had enough, and I built a SharePoint server in an afternoon. We have datacenter licenses for our VMware hosts, and SharePoint Foundation 2013 is free, so it cost us nothing. I hooked the back end into our IT SQL server, but SharePoint will install a SQL Express database if you need it. Keep in mind that there's a 4 GB limit on any database in an Express install, though.
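If you do go the SQL Express route, it's worth keeping an eye on that 4 GB ceiling. Here's a minimal sketch (run from a machine with the SharePoint snap-in loaded; the rounding and output format are my own, not from SharePoint itself):

```powershell
# Sketch: list each SharePoint content database and its approximate size in GB,
# so you can watch for the 4 GB SQL Express limit.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Get-SPContentDatabase | ForEach-Object {
    # DiskSizeRequired reports the database size in bytes
    $sizeGB = [math]::Round($_.DiskSizeRequired / 1GB, 2)
    Write-Host "$($_.Name): $sizeGB GB"
}
```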
This was my first Windows Server 2012 box, and, well... meh. We manage things through a remote desktop app called mremote, and RDP makes the new server OS a bit challenging to use. You can't pass the Windows shortcut key, so I actually have to mess with the hot corners to get to the start screen. AND since the Windows shortcut key doesn't work through RDP, I can't hit Win+R and run commands that way (or Win+E to open Explorer). If anyone has a better remote desktop app that will pass Windows keys, I'm all ears. This might not be Microsoft's fault - I probably just need to find a workaround.
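For what it's worth, the stock Remote Desktop client does have a "Keyboard: Apply Windows key combinations" setting under Local Resources that can send the Windows key to the remote session; I can't say whether mremote exposes it, but in a saved .rdp connection file the setting looks like this:

```
keyboardhook:i:1
```

(0 applies key combinations on the local computer, 1 on the remote computer, 2 only in full-screen mode.)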
I digress. So I got SharePoint all set up and started dumping in our docs. At a previous job, I had attempted to write a script (this was before I learned PowerShell) that would copy all of our files out of a SharePoint repository to a secondary location every day, but it took so long to run that it was ridiculous. I thought I would give it another try. I found a very helpful blog post by Jeffrey B. Murphy at jbmurphy.com that did almost all of the work for me, and then I just added a few bells and whistles to complete it. They must have improved something somewhere (WebDAV perhaps?) because copying the files is MUCH (MUCH) faster. Here's the script:
Add-PSSnapin Microsoft.SharePoint.PowerShell
$StartTime = Get-Date
$TempFile = "C:\Temp\SPFiles.txt"
# Delete the files that were copied over yesterday - this empties the folder
Get-ChildItem \\DRServer\e$\SharepointKB | Remove-Item -Recurse -Force
$SiteURL = "http://SharepointServer"
$DocumentLibrary = "Knowledge Base"
$Destination = "\\DRServer\e$\SharepointKB"
$spWeb = Get-SPWeb -Identity $SiteURL
$list = $spWeb.Lists[$DocumentLibrary]
# Build a list of all of the files that will be copied and write it to a text file
$FilesMoved = $list.Items | Select-Object File | Sort-Object File | Format-Table -Wrap
$FilesMoved | Out-File $TempFile
# This section actually copies the files
foreach ($listItem in $list.Items)
{
    $DestinationPath = $listItem.Url.Replace("$DocumentLibrary","$Destination").Replace("/","\")
    Write-Host "Downloading $($listItem.Name) -> $DestinationPath"
    if (!(Test-Path -Path (Split-Path $DestinationPath -Parent)))
    {
        Write-Host "Creating $(Split-Path $DestinationPath -Parent)"
        $dest = New-Item (Split-Path $DestinationPath -Parent) -Type Directory
    }
    $binary = $spWeb.GetFile($listItem.Url).OpenBinary()
    $stream = New-Object System.IO.FileStream($DestinationPath, [System.IO.FileMode]::Create)
    $writer = New-Object System.IO.BinaryWriter($stream)
    $writer.Write($binary)
    $writer.Close()
}
$spWeb.Dispose()
$EndTime = Get-Date
# Email the start and end times along with the file list created earlier
$Subject = "Sharepoint KnowledgeBase Copied to DR Server"
$Body = "Start Time: $StartTime `r`n`r`nEnd Time: $EndTime `r`n`r`nList of Sharepoint files copied is attached"
Send-MailMessage -To me@myjob.com -From administrator@myjob.com -Subject $Subject -Body $Body -SmtpServer mailserver.myjob.com -Attachments $TempFile
Remove-Item $TempFile
By the way, you will need to run this script from a server that has the SharePoint PowerShell snap-in installed. Your SharePoint server is the easy choice; I didn't even look into whether I could install the snap-in elsewhere, because I tried to go down that rabbit hole with SharePoint 2010 and spent way too much time fighting with it.
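To make this a nightly job, you can register the script as a scheduled task on the SharePoint server. A sketch using schtasks (the script path and task name here are my own examples, not from my actual setup):

```powershell
# Sketch: run the copy script every night at 2 AM as SYSTEM.
# Adjust the task name and script path to match your environment.
schtasks /Create /TN "SharePoint KB DR Copy" `
    /TR "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Copy-KBtoDR.ps1" `
    /SC DAILY /ST 02:00 /RU SYSTEM
```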
Wednesday, April 10, 2013
Tuesday, March 19, 2013
How To Set Up KMS To Activate Office 2010
A friend and co-worker of mine battled with getting our KMS host to activate Office 2010 computers recently. Despite being fluent in Google searching, she just couldn't piece it all together until the end, when she finally succeeded. She graciously gave me permission to post this here. Most of the following is in her words, but I've cleaned things up a bit. Thanks Z!
To activate KMS licenses for a specific product, follow these steps. The only change that may be necessary is the activation ID that starts with bfe in this documentation; it may need to be replaced with the ID corresponding to the software you're installing. This ID refers to Office 2010 Standard specifically.
1. Office 2010 Standard SP1 License Activation - KMS Server
Go to the MSVLC website and download the Office Std 2010 Key management service host
It is an ISO, so save it somewhere and then mount it.
2. On the KMS host server, run the KMS host executable file from the mounted ISO. It will come up and say:
Microsoft Office 2010 KMS Host licenses installed successfully.
Would you like to enter a Microsoft Office 2010 KMS host product key and proceed with internet activation now?
3. Click Yes and enter the key provided in your MSVLC account for the product you're installing:
We installed the key for Microsoft Office 2010 Standard SP1
Then it kicked back a message that says:
Microsoft Office 2010 KMS Host License Pack (in the header)
Microsoft Office 2010 KMS host product key has been successfully installed and activated.
For KMS host configuration options, see Slmgr.vbs.
OK
I clicked ok.
Then go to
C:\Windows\system32> cscript slmgr.vbs /dlv bfe7a195-4f8f-4f0b-a622-cf13c7d16864
(NOTE: bfe7a... is the activation ID for Office 2010 specifically; there may be a different ID for 2013.)
When the data is returned you will need to know the following information:
Activation ID: bfe7a195-4f8f-4f0b-a622-cf13c7d16864
Installation ID: 022076-474424-662791-738941-933420-822321-034281-859076-352701
Now call Microsoft at 1-888-725-1047. You will have to input the Installation ID, and once it's confirmed, they will provide you with a Confirmation ID. Write it down.
Confirmation ID: 373006-236172-104250-501940-094242-708836-908992-285622
Now go back to the cmd prompt and type:
cscript slmgr.vbs /atp <ConfirmationIdWithoutDashes> <ActivationID>
and press Enter. It should come back with: "Confirmation ID for product <ActivationID> deposited successfully."
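Putting the command-line half of the steps above in one place (a sketch using the Office 2010 Standard activation ID from this post; substitute your own IDs where appropriate):

```powershell
# Sketch of the slmgr.vbs portion of the process. The activation ID below is
# specific to Office 2010 Standard; the Confirmation ID placeholder stays as-is.
cd C:\Windows\System32

# 1. Display license info - note the Installation ID in the output
cscript slmgr.vbs /dlv bfe7a195-4f8f-4f0b-a622-cf13c7d16864

# 2. Call Microsoft (1-888-725-1047), read them the Installation ID,
#    and write down the Confirmation ID they give you.

# 3. Deposit the Confirmation ID (with dashes removed)
cscript slmgr.vbs /atp <ConfirmationIdWithoutDashes> bfe7a195-4f8f-4f0b-a622-cf13c7d16864
```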
Here is the Microsoft TechNet page that deals with all of this, if you'd like to read some drivel...
Tuesday, March 12, 2013
Unending Cycle of .NET Updates
So I recently installed .NET Framework 4 on my Windows 7 laptop because I needed it for a new app. Afterwards, I naturally needed to install a slew of accompanying updates from our WSUS server. Two of them claimed to install successfully, but would pop back up later as needing to be installed again. I looked into it; the specific KB numbers aren't important, because this can happen with any of them.
What IS important is how to fix this irritating issue. On some of the forums I visited, people had run the installation 30 times or more with the same result! I facepalmed when I read the fix: use the venerable .NET cleanup tool to remove the offending .NET version, then reinstall it from the standalone installer, which you can get from Microsoft's downloads page.
I have used the .NET cleanup tool (which you can find at Aaron Stebner's blog here) before to clean up all kinds of .NET shenanigans - weird error messages and the like - so I can't believe I overlooked this great tool when confronted with a .NET issue.
Nuke it from orbit, it's the only way to be sure.
Friday, February 22, 2013
Meet My New Dell Equallogic PS6500ES SAN
I don't have anything big to write about, but I've got a lot of small tips, tricks, and bits of info bookmarked in my "To Blog About" list. So, I thought I would get a few of them out of the way.
What I've been doing the past few months is implementing a new SAN and preparing to upgrade all 5 of my VMware ESX 4.1 hosts to version 5.0. Whenever I say that, someone will inevitably pipe up and ask, "why don't you just go to the newest 5.1 release?" The reply is that my servers aren't on the HCL for that version. They're not THAT old, though - HP ProLiant DL385 G5s. I'll upgrade in a couple of years when my hosts get replaced as part of my 5-year cycle. These are doing just fine performance-wise.
The search for a new SAN has been really frustrating, because there are SO many options. In the end, I just wanted shared iSCSI storage that would meet some space and performance goals (Veeam ONE monitoring helped a great deal here), and that would replicate across a 1Gbps WAN fiber link to our other facility across town. Another consideration was that because I'm no storage expert, we needed something that wasn't too complicated and that was widely used, allowing me to ask questions in forums and get responses from people familiar with that storage platform.
I am no storage specialist - I have to wear many hats and really can't specialize in anything in my current role - so comparing SAN solutions from different vendors was pretty challenging. It was frustrating to learn that there aren't many solid metrics you can use to compare different storage solutions unless you get the hardware in-house and run your own tests. I was going by IOPS and latency numbers until I learned that vendors can publish whatever numbers they want and they'll technically be true; you have to look at how the tests were run (random vs. sequential, data block size, etc.). Here are some really good reads I ran across while making my decision. They also got me up to speed on server storage in general:
Pointing out the IOPS fallacy:
http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/
This article outlines the folly of using RAID5 with a hot spare. As a result of this article, all of my local storage is RAID10 from now on, as it's the safest for my data.
http://www.smbitjournal.com/2012/07/hot-spare-or-a-hot-mess/
This Techrepublic article got me up to speed on different types of drives and their performance difference:
http://www.techrepublic.com/blog/networking/how-sas-near-line-nl-sas-and-sata-disks-compare/5323
In the end, it came down to EMC vs. Dell, with price and usability the main concerns. We decided we wanted to fortify our SAN performance with SSDs and auto-tiering, which automatically moves "hot" data blocks up to SSD storage for better performance. At equivalent pricing, Dell offered over 2 TB of SSD space, while the VNX recommended to us had only 200 GB. Another big difference between the two (and this is just my take) is that EMC SANs seem to be designed for use by an actual storage engineer. Sure, EMC will point you to the VNXe line, but we're past that in terms of performance/capacity/options. I want my SAN to be set-it-and-forget-it. In the end, we bought an Equallogic PS6500ES. It was installed last Friday.
I moved about 10 testing VMs onto the storage and was looking at performance in SAN HQ. SAN HQ is Dell's SAN monitoring software, which is very nice, and very easy to use. I wish I could dig a little deeper (as far as auto-tiering goes), but it is what it is. What I found was some pretty terrible latency numbers! With my migration less than a week away, I went into panic mode and called my Dell Storage reps to find out why my performance sucked. Here are the IOPS and latency graphs I was seeing (acceptable latency is below 20ms):
The Dell reps talked me down off the ledge and explained that because my SAN wasn't doing anything (note the low IOPS - in production this thing will be humming along at 2,000-3,000), the hard drives had to spin up to serve my I/O requests from scratch. I fired up a few VMs with IOMeter and followed this guide (second paragraph from the end - it's just a quick how-to on IOMeter) to create a boatload of I/O. IOMeter is a really neat app: not only does it generate I/O, but you can specify the block size, the read/write percentage, and whether the I/O is random or sequential.
Sure enough, things changed quite a bit:
At around 6,000 IOPS, I set off warnings that I had saturated my three 1 Gb iSCSI links. While you can't really tell because of the scale of the graph, my latency stabilized at around 12 ms during heavy read and write activity. In case you're interested, this load was 60/40 read/write with 64 KB blocks, all random.
Here's another nice How-To style article on IOMeter.