IT Stuff I Learned Today
Stuff I learned as a System Administrator.
Thursday, October 28, 2021
Increase in Password Spraying Attacks
Sunday, October 17, 2021
I'm Back, with my Master Inventory Database methodology
I’m going to start writing again!
I was able to pivot my career from a Senior System Admin into a Security Engineer role for my organization a few years ago. I was very busy and kind of let this blog go while I sank my teeth into building and managing organizational standards related to the CIS 20.
I learned a lot in my journey, and I have a lot of ideas for new articles that may help you in getting your organization up to snuff with regards to security.
One of my primary passions during this time was building a Master Inventory Database. I did this by pulling data from various sources within my environment, combining the data, and then asking questions of the data. I built a suite of Powershell commands and deployed these through profiles that could be used by various roles within the organization to view role-relevant data in a single pane of glass.
To retrieve this data, I mostly used tool-specific automated reporting (at midnight the tool exports its inventory to a CSV file, which I would then ingest), though I used Powershell where modules existed for the tool (WSUS/DNS/AD/DHCP), and I was in the process of branching out into querying APIs before I left the position.
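As an illustration of that ingest-and-combine step, here's a minimal PowerShell sketch. The file path, CSV column names, and use of the RSAT ActiveDirectory module are my assumptions for illustration, not a prescription:

```powershell
# Ingest last night's export from the client-management tool
# (hypothetical path and columns: Name, Model, RAM)
$cmInventory = Import-Csv "C:\Ingest\ClientMgmt.csv"

# Pull the authoritative computer list straight from AD (RSAT ActiveDirectory module)
$adComputers = Get-ADComputer -Filter * -Properties LastLogonDate

# Index the CSV rows by uppercase name for quick lookups when combining sources
$cmByName = @{}
foreach ($row in $cmInventory) { $cmByName[$row.Name.ToUpper()] = $row }

# Combine: one record per AD computer, enriched with tool data where present
$combined = foreach ($c in $adComputers) {
    [PSCustomObject]@{
        Name          = $c.Name
        LastLogonDate = $c.LastLogonDate
        Model         = $cmByName[$c.Name.ToUpper()].Model
    }
}
```

The key design choice is normalizing every source onto the same key (the hostname) before joining; everything else is just adding columns.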
DATA IS THE KEY. If you have the data, you can present it in a way that allows management to make good decisions for your organization, tailored to risk appetite and security program maturity.
Controls 1 and 2 of the CIS 20 deal with inventory. If you start reading through the rest of the controls, you really can’t say that you conform to many of the other controls without having first identified every device and piece of software that exists within your environment. This Master Inventory Database project aimed to thoroughly satisfy those first two controls and pave the way for the implementation success on the rest of them.
1. DNS
   a. Every system should have a DNS record, though logic should exist to weed out client systems that may be transient. The DNS record usually informs the other systems mentioned below of the name of a system.
   b. The DNS name should conform to a naming standard (note that this initiative may take years to accomplish; it's much easier to change system names on upgrade or replacement than to rename existing systems).
   c. Does an A record exist?
   d. Does the system have a PTR record?
   e. Are there any aliases?
2. Basic network queries:
   a. Is ping successful?
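Checks 1 and 2 lend themselves to a single script. A rough sketch using the DnsClient module's Resolve-DnsName (the input list path is hypothetical):

```powershell
# For each inventoried name: does an A record resolve, does a PTR exist, does it ping?
$systems = Get-Content "C:\Lists\Systems.txt"   # hypothetical input list
$results = foreach ($name in $systems) {
    $a = Resolve-DnsName $name -Type A -ErrorAction SilentlyContinue |
         Where-Object Type -eq 'A' | Select-Object -First 1
    $ptr = if ($a) { Resolve-DnsName $a.IPAddress -Type PTR -ErrorAction SilentlyContinue }
    [PSCustomObject]@{
        Name   = $name
        HasA   = [bool]$a
        HasPTR = [bool]$ptr
        Pings  = Test-Connection $name -Count 1 -Quiet -ErrorAction SilentlyContinue
    }
}
$results | Export-Csv -NoTypeInformation "C:\Temp\DnsPingAudit.csv"
```

Filtering on `Type -eq 'A'` matters because Resolve-DnsName returns the CNAME record too when an alias is in play.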
3. Client Management System, which hopefully includes third-party patch management:
   a. Form Factor/BitLocker Status
      i. Protected mobile system or not?
   b. RAM, CPU, Make, and Model
      i. Standardization/replacement planning purposes
   c. HDD Space
      i. Ensure there's space to start saving logs
   d. Last Logon User
      i. Who is using this system?
   e. Installed Software
      i. Have an idea of what's running in your environment, so when you're going through the day's news you can identify things that may affect you. For example, seeing the headline "Google Chrome releases patch for 0-day vulnerability".
      ii. Licensing
      iii. Are all systems running the software required by IT Operations/Security (agents, antivirus, etc.)?
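If your client-management tool can't export a software list, one common fallback is reading the registry Uninstall keys rather than querying Win32_Product (which is slow and triggers MSI self-repair). A local sketch, to be wrapped in your remoting method of choice:

```powershell
# Enumerate installed software from both 64-bit and 32-bit Uninstall keys
$paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
         'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
Get-ItemProperty $paths -ErrorAction SilentlyContinue |
    Where-Object DisplayName |                 # skip entries with no display name
    Select-Object DisplayName, DisplayVersion, Publisher |
    Sort-Object DisplayName
```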
4. AD Computers
   a. Is the BitLocker recovery key successfully stored here?
   b. Last Logon Time
   c. Who is the "owner" of this system?
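A sketch of those checks with the RSAT ActiveDirectory module. Reading BitLocker escrow data requires rights to the msFVE-RecoveryInformation child objects, and using the ManagedBy attribute as the "owner" is my assumption:

```powershell
# For each AD computer: last logon, owner, and whether a recovery key is escrowed
$report = foreach ($c in Get-ADComputer -Filter * -Properties LastLogonDate, ManagedBy) {
    # BitLocker recovery keys live as child objects under the computer object
    $keys = Get-ADObject -SearchBase $c.DistinguishedName `
                         -Filter { objectClass -eq 'msFVE-RecoveryInformation' }
    [PSCustomObject]@{
        Name          = $c.Name
        LastLogonDate = $c.LastLogonDate
        Owner         = $c.ManagedBy
        KeyEscrowed   = [bool]$keys
    }
}
```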
5. AD Users
   a. Is there an employee number on file to verify identity for helpdesk calls?
   b. Is the user account a member of any special groups (Domain Admins, etc.)?
   c. Is there a manager listed for out-of-the-ordinary requests, such as a user requesting access to a share or forgetting their employee ID during identity verification?
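These user-hygiene checks also script cleanly against AD (again assuming the RSAT ActiveDirectory module):

```powershell
# Flag gaps: missing employee number, missing manager, privileged group membership
$domainAdmins = (Get-ADGroupMember 'Domain Admins' -Recursive).DistinguishedName
$report = foreach ($u in Get-ADUser -Filter * -Properties EmployeeNumber, Manager) {
    [PSCustomObject]@{
        Name           = $u.SamAccountName
        HasEmployeeNum = [bool]$u.EmployeeNumber
        HasManager     = [bool]$u.Manager
        IsDomainAdmin  = $domainAdmins -contains $u.DistinguishedName
    }
}
# e.g. the accounts helpdesk can't verify over the phone
$report | Where-Object { -not $_.HasEmployeeNum }
```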
6. Antivirus
   a. Has the system checked in recently?
7. WSUS Status
   a. Is the system patched?
   b. Has it reported recently?
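If you'd rather query WSUS directly than ingest its scheduled reports, the UpdateServices module that ships with the WSUS role is scriptable. A sketch (the 14-day staleness threshold is an arbitrary choice):

```powershell
# Run on (or remote into) the WSUS server itself
$wsus = Get-WsusServer   # connects to the local WSUS instance by default

# Computers that haven't reported status in the last two weeks
Get-WsusComputer -UpdateServer $wsus |
    Where-Object { $_.LastReportedStatusTime -lt (Get-Date).AddDays(-14) } |
    Select-Object FullDomainName, LastReportedStatusTime
```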
8. Vulnerability Management Reports (I've worked with Rapid7 InsightVM)
   a. System risk score
   b. Open ports (you can also get these by automating nmap scans)
   c. Configuration standard scanning results (CIS Benchmarks/DISA STIGs)
   d. Installed software
      i. The same considerations as under the Client Management System item (news awareness, licensing, and required IT Operations/Security software), now confirmed against a second data source.
9. Wireless Network
   a. SSID Connection/VLAN
10. DHCP
   a. What is the system's IP address?
   b. From what DHCP scope does the system pull?
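A sketch of pulling lease-and-scope data with the DhcpServer RSAT module (the server name is a placeholder):

```powershell
# Every lease on dhcp01, tagged with the scope it was drawn from
$leases = Get-DhcpServerv4Scope -ComputerName dhcp01 | ForEach-Object {
    Get-DhcpServerv4Lease -ComputerName dhcp01 -ScopeId $_.ScopeId |
        Select-Object HostName, IPAddress, ScopeId, LeaseExpiryTime
}
$leases | Export-Csv -NoTypeInformation "C:\Temp\DhcpLeases.csv"
```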
11. Network Scanner (ManageEngine OpUtils, NetDisco, etc.)
   a. What switch port is the system plugged into?
   b. VLAN
12. Other things you could assess, but I didn't get there:
   a. Are backups present, up to date, and successful?
   b. Is the system being monitored for outages?
   c. Browser plugins and/or Office add-ins installed
   d. O365 user info/Conditional Access/licensing/sensitive email groups
   e. Local registry settings, cross-referenced with group policy (since Administrative Template GPO settings are ultimately just registry values)
   f. System user rights assignments
      i. Who's a local admin?
      ii. Make sure that sensitive rights are appropriate (who can log on as a batch job?)
   g. System logging configuration
      i. Best configured via group policy
   h. Windows Firewall status
      i. Best configured via group policy
   i. System shares and permission settings
      i. Tell me 'Everyone' doesn't have access
      ii. Client systems typically shouldn't have shares
   j. Any other tooling you may have that contains information of value and offers an API or reporting capability. For instance, you could pull in Security Awareness Training records and phishing test results to help identify your riskiest users and tailor future training accordingly.
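However you gather the pieces, the combining step itself stays simple once every source is keyed on the same hostname. A sketch with hypothetical per-source hashtables ($dns, $ad, $av stand in for the sources above):

```powershell
# Fold per-source hashtables (each keyed by uppercase hostname) into one master record
$master = foreach ($name in $ad.Keys) {
    [PSCustomObject]@{
        Name        = $name
        InDns       = $dns.ContainsKey($name)
        AvCheckedIn = $av[$name].LastCheckIn
        LastLogon   = $ad[$name].LastLogonDate
    }
}
$master | Export-Csv -NoTypeInformation "C:\Temp\MasterInventory.csv"
```

Anchoring on AD's computer list means anything in DNS or the AV console that AD has never heard of surfaces as a gap worth investigating.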
Once you get a handle on even some of the above, you can start sorting systems into groups and creating standards (security or otherwise). You can build reports that you can hand to operations to resolve. For example:
a. These mobile systems don't have BitLocker enabled.
b. These systems don't have antivirus installed.
c. These systems don't have a DNS name; what are they?
d. These systems still have Adobe Flash installed!
e. These systems have Office 2010 installed!
f. These systems have open Telnet ports.
g. These systems haven't installed this month's patches yet.
h. These systems don't have our web filter agent installed or the proxy set correctly for protected web access.
i. These systems haven't been seen by <Insert tool here> for X days (meaning their agents are broken, they aren't checking in, or maybe the computer isn't being used). Such a rule exists to make sure you don't have some device coming back online after 3 months without the appropriate patch levels. I didn't have a network posture-checking capability.
j. These users have high-level access; do they need it?
k. These systems have an agent that's out of date.
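Each of these reports is then just a filter over the master inventory. For example (the property names are hypothetical, matching whatever columns you built):

```powershell
# Report (a): mobile systems without BitLocker
$master | Where-Object { $_.IsMobile -and -not $_.BitLockerEnabled }

# Report (b): systems without antivirus
$master | Where-Object { -not $_.AvInstalled }

# Report (i): systems not seen by the tool in 30 days
$master | Where-Object { $_.LastSeen -lt (Get-Date).AddDays(-30) }
```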
Any exceptions to your standards should be tracked in a risk register and reviewed regularly with management.
Remember, it is up to management to accept the risk (do nothing), remediate it (fix), mitigate it (lessen the risk, typically by putting such devices into their own VLAN), or transfer it (insurance, out-sourcing management).
I feel like most of the stress I’ve encountered in my career is due to this risk. I know the risk is there, and I’ve communicated this risk to management in a data-based fashion (keeping FUD to a minimum, which is a nebulous line).
Theoretically I should be off the hook, psychologically. HOWEVER, where this breaks down is that historically I’ve been the individual responsible for incident response.
So, the way this plays out in my brain a lot of the time is that I’ve given the powers that be all the data, and they’ve decided to accept the risk, BUT I’m the one waiting for the 3am phone call that we’ve been compromised, possibly because of the flaw I identified.
I'm still wrangling with this. In my security engineering role, I let the stress get to me. I took the stress out on my family and made some very bad life choices. Ultimately, I ended up quitting my job for my mental health.
I urge you to try to come to terms with the fact that you may not be able to control your security environment, and to steel yourself against the fallout that may land on you because of it.
Saturday, September 24, 2016
AD Sites and Services: Show Services
So I just went in to reauthorize my DHCP server, and I got a very helpful (ahem) message telling me that "The specified servers are already present in the Directory Service".
Yay Google: it turns out there's a very cool part of Active Directory Sites and Services that I'd never even seen before!
From a DC, open up Active Directory Sites and Services. Normally this is where you do all of the fancy site replication stuff if you have multiple AD sites, but if you highlight the root of the structure, choose the "View" menu, and select "Show Services Node", there's a lot more to see.
Most of this stuff I wouldn't touch without explicit instructions, of course, but still neat. You can see Exchange stuff, and your certificate info under the Public Key Services folder.
What I needed to do was to delete my server's entry under the NetServices folder, and then I was able to again authorize my DHCP server.
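On 2012-and-later servers, the DhcpServer PowerShell module exposes the same authorization data the Services node holds, which can save some tree-spelunking (the names below are examples):

```powershell
# List the DHCP servers currently authorized in the directory
Get-DhcpServerInDC

# After clearing the stale entry, authorize the server again
Add-DhcpServerInDC -DnsName dhcp01.contoso.com -IPAddress 10.0.0.10
```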
Friday, June 24, 2016
Gathering Important NTP Settings with Powershell
One thing I'm much more cognizant about is putting any constant variables at the TOP of my scripts. This allows me to more easily reuse my scripts, and also to change the variables quickly without having to look all over the place.
#BEGIN SCRIPT
<#
REQUIRED: Make a folder called C:\Lists, and put a file in it named ServerNTPSettingsAudit.txt that contains your servers' names (one per line). Also, this script assumes that you have a C:\Temp folder.
#>
#Variables
$List = "C:\Lists\ServerNTPSettingsAudit.txt"
$Attachment = "C:\Temp\NTPSettings.csv"
#Email Variables
$To = "reporting@contoso.com"
$From = "reporting@contoso.com"
$SMTPServer = "mail.contoso.com"
$Subject = "PS Report - NTP Settings Audit"
$Body = "See Attached"
#Get the list of servers
$Servers = Get-Content $List
#Create an empty array to hold the data
$NTPSettings = @()
#Registry constant for the HKEY_LOCAL_MACHINE hive
$HKLM = 2147483650
#Foreach server, read the relevant NTP values from the remote registry via WMI
Foreach ($Server in $Servers){
    $reg = [wmiclass]"\\$Server\root\default:StdRegProv"
    #Is the machine syncing via NTP or the domain hierarchy (NT5DS)? (REG_SZ)
    $key = "SYSTEM\CurrentControlSet\Services\W32Time\Parameters"
    $NTPType = $reg.GetStringValue($HKLM, $key, "Type")
    #AnnounceFlags governs whether the machine advertises itself as a (reliable) time source (REG_DWORD)
    $key = "SYSTEM\CurrentControlSet\Services\W32Time\Config"
    $NTPFlags = $reg.GetDWORDValue($HKLM, $key, "AnnounceFlags")
    #Is the NtpServer provider enabled, i.e. does this machine serve time to clients? (REG_DWORD)
    $key = "SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer"
    $NTPServer = $reg.GetDWORDValue($HKLM, $key, "Enabled")
    #Build one output row per server
    $ServerItem = New-Object System.Object
    $ServerItem | Add-Member -Type NoteProperty -Name "Server Name" -Value $Server
    $ServerItem | Add-Member -Type NoteProperty -Name "NTP Type" -Value $NTPType.sValue
    $ServerItem | Add-Member -Type NoteProperty -Name "AnnounceFlags" -Value $NTPFlags.uValue
    $ServerItem | Add-Member -Type NoteProperty -Name "IsNTPServer" -Value $NTPServer.uValue
    $NTPSettings += $ServerItem
} #End Foreach
#Export the array to CSV
$NTPSettings | Export-Csv -NoTypeInformation $Attachment
#Send me the list as an email attachment
Send-MailMessage -To $To -From $From -SmtpServer $SMTPServer -Subject $Subject -Body $Body -Attachments $Attachment
#Delete the temp file
Remove-Item $Attachment -Force -ErrorAction SilentlyContinue
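To spot-check what the script reports against a server's effective configuration, the built-in w32tm tool can query the same settings remotely (server name is an example):

```powershell
# Shows the running Type, AnnounceFlags, and NtpServer provider state
w32tm /query /computer:server01 /configuration

# Shows the current time source and stratum the server is actually using
w32tm /query /computer:server01 /status
```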