  1. We recently put nginx as a reverse proxy in front of the LT server (and enforce TLS with it). Fail2Ban is running on the nginx install, so that provides a reasonable level of peace of mind. We also have an IP whitelist for access to the /labtech/transfer path (the ltshare) and update the whitelist based on all router addresses in our system. Not sure which ports you're exposing, but we are only proxying 80 and 443. The ports for tunnels and whatnot are still forwarded straight to the LT server, but those should be difficult to exploit (as long as you aren't exposing 3306 and the like).
  2. GorillaBiscuit, you have to import the xml, delete the script that it creates, then import again. The import is creating a race condition between generating the script and generating the EDFs for the script. Also, more generally, I've had to tweak the PowerShell scripts that check backlog counts. In dfsr_backlog_lt I changed line 27 to both capture the script start time and clean the JSON data (LT inserts escape chars for the quotes and whatnot): $scriptStartTime = get-date and $GroupData = [string]([regex]::Unescape($(Get-Content $JSONGroups))) | ConvertFrom-JSON. Later down, on the new line 50 (since we inserted a line or two), I changed the Start-Job command to give the job a name as well as not output to the pipeline: $null = Start-Job -ScriptBlock $JobBlock -Name DFSGroupCheck. Finally, on line 58, I am getting only those jobs, by name, that have run since the script started: ForEach ($Job in $(Get-Job -Name DFSGroupCheck -After $scriptStartTime)). I also made the same change to the dfsr_folders_lt.ps1 file on line 6 (though I'm not sure this script gets called by anything): $folderData = [string]([regex]::Unescape($(Get-Content $folderJSON))) | ConvertFrom-JSON
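For anyone wondering what the Unescape tweak above is actually doing, here's a minimal, self-contained sketch of the pattern. The JSON string is made-up sample data standing in for what LT writes to the file (in the real script it comes from Get-Content on the JSON file):

```powershell
# Sample of the escaped JSON LT produces (backslash-escaped quotes).
# This string is illustrative only; the real script reads it from a file.
$escaped = '{\"GroupName\":\"Accounting\",\"Backlog\":42}'

# [regex]::Unescape strips the backslash escapes so ConvertFrom-Json can parse it.
$data = [string]([regex]::Unescape($escaped)) | ConvertFrom-Json

$data.GroupName   # Accounting
$data.Backlog     # 42
```

Without the Unescape call, ConvertFrom-Json chokes on the escaped quotes, which is why the one change fixes both scripts.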
  3. Just want to thank you for this simple but effective (and clever) set of scripts / monitors. One point: why not just make the one group and have a search filter for the backlog check?
  4. Any thoughts on using rbenv? https://gorails.com/setup/ubuntu/14.04 I'm working on it right now. Seems like it's going to work.
  5. I agree with starbucksgold on feature requests. A couple of things to add: The option to limit user access via user classes. I am not familiar with the way plugins behave with user classes. One way I could see would be to have two plugins: one for the backend application definitions, and one for actually displaying the tabs in the various sections of the Control Center. That way we could give access to one but not the other. Add conditional displays, e.g. an application is defined to show up at the computer level, but only if a particular field matches a certain value. Otherwise, can't wait to build more things in it. Looks really promising!
  6. Pretty sure the LabTech script I posted above doesn't require an agent on the LT server, but I could be wrong.
  7. I'm looking to alert if the O365 password is not correct. I don't see any handling for that scenario. I'd really prefer not to modify things if I don't have to. I see three options: 1) it's already there and I just missed it 2) the functionality is added and I import the new set 3) I create a separate script to check just for that and create a ticket if the password doesn't work Any thoughts?
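If option 3 is the route you take, a minimal standalone check might look like this. This is a sketch, not the plugin's own method: it assumes the MSOnline module is installed, and the account name and password handling are placeholders for wherever you store the service credential.

```powershell
# Hypothetical standalone check: verify the stored O365 credential still
# works, and emit a failure string a monitor or LT script can ticket on.
Import-Module MSOnline

$user = 'svc-o365@example.com'                                  # placeholder
$pass = ConvertTo-SecureString 'CHANGEME' -AsPlainText -Force   # placeholder
$cred = New-Object System.Management.Automation.PSCredential ($user, $pass)

try {
    # Connect-MsolService throws if the credential is rejected
    Connect-MsolService -Credential $cred -ErrorAction Stop
    Write-Output 'O365 credential OK'
} catch {
    # Monitor on this string, or raise the ticket right here
    Write-Output "O365 credential FAILED: $($_.Exception.Message)"
}
```

Scheduling that as its own client script keeps the imported script set unmodified, which sounds like what you're after.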
  8. I went ahead and implemented a form of this by just hardcoding the most chatty event IDs. Here's a screenshot of my client script that I've scheduled to run every 10 minutes (Dashboard > Maintenance > Scheduled Client Scripts). This might be slightly less efficient than just removing the items, but it's the only way I could see to list how many were being deleted each time. Another thing to note is that I included event ID 1. Best I can tell, this is mostly from CRON logs on Linux machines, but the fact that I'm limiting to just security logs makes me care very little about purging them. Also, to throw some numbers into the mix, we had 7 days of retention and this was 14 million rows. I've since rolled that back to 3 days of retention... we'll see if LabTech properly purges the logs.
  9. If you run it directly in PowerShell, you would of course have to paste only the data after "-command". Do you get any kind of error message? As for creating the monitor, you will want to do an EXE monitor. Put the full PowerShell path in the first blank, and put the rest ("-command ...") in the second blank.
  10. In my experience, importing LT xml is clunky and prone to breaking things / having monitors kick off before things are working (I've also never exported one myself). Is there something in particular that you cannot re-create based on my original description? It should be pretty straightforward to create the scripts, then create the monitor to reference the monitor script.
  11. Absolutely. I forgot I posted this since nobody had commented... I've since made a slight tweak, and I'll go edit the original post. This monitor was catching VSS snapshots that are created by shadow copy schedules, so I now filter by the snapshots not being "ClientAccessible". Also, since we've caught up on the old snapshots, I am about to add a script to run in the monitor's "if" section to delete the oldest non-ClientAccessible snapshot... hopefully automating away the tickets altogether.
  12. I'd like to share a system I've implemented to solve a problem that we had (and you might have without knowing it). Problem: Our backup software has a timeout of 10 minutes when taking VSS snapshots. We have / had issues in our virtual environment where Unitrends would initiate a backup, wait for 10 minutes, then fail the backup. We would get a ticket and re-issue the backup; no big deal. The problem is that Windows was told to take a VSS snapshot, and it still went through with that. This means we have VSS snapshots, with their attributes set to not purge, that our backup software doesn't know about. Occasionally we would get a ticket about a disk filling up, and we would track it down to VSS storage. The usage would be handled and we'd move along. I was not content with not knowing why we would occasionally have filled disks from VSS, so I investigated and found the above to be true based on circumstantial evidence (failed backups with start and stop times encompassing the VSS snapshot time). Solution: Delete the snapshots! I started by just running a command against all our servers ("vssadmin list shadows") and exported all that data to a spreadsheet to see what the scope of this issue was. Turns out we have snapshots going back as far as late 2014, across over 90 servers, consuming over 2.5TB total on our SAN. This is going to take a lot of resources to resolve, but we also need a way to find the problem machines moving forward. I broke down the system into manageable pieces. Each piece makes the process a little simpler than the last. 1) Remote monitor to find the machines.
I created a one-liner in PowerShell to list the oldest VSS snapshot older than a week and trigger an error condition: c:\windows\system32\windowspowershell\v1.0\powershell.exe -command $($datetime = $(get-wmiobject win32_shadowcopy | ?{-not $_.ClientAccessible}).installdate ; if($datetime){$date = [management.managementDateTimeConverter]::ToDateTime( ($datetime | sort | select -f 1) ) ; if($date -lt (get-date).AddDays(-7)){write-output 'Oldest_',$date}else{write-output 'All good!'}}else{write-output 'All good!'})
2) Notice the alert template / monitor script. I created a clone of other monitor scripts that does basic checking for alerting being disabled, then runs a script to give a detailed list of VSS snapshots on the system with three columns: Volume name | Snapshot ID | Snapshot Date. The output of that data is stored in a variable, @PSOutput@. The ticket body is the following: Monitor script VSS Snapshot list script line 8 writes a file to @PS1Path@ with the following contents
3) I wrote a script to delete a particular snapshot set ID and then list the remaining snapshots (the same list script that is called from the monitor script). I made it isolated and injected a hard-coded 3-minute wait for safety (so someone couldn't run all the snapshot sets at once and bork performance on a particular server). I can share specific syntax that people are asking for, but I don't feel like exporting the LT xml is a good idea, since my scripts reference a lot of internal function scripts that I've written, and that probably makes importing more complex than just rewriting based on the screenshots above and tweaking to fit your needs. Edit: changed some of the PowerShell one-liners to exclude ClientAccessible snapshots > | ?{-not $_.ClientAccessible}
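For readability, here is the monitor one-liner expanded into script form (same logic and same output strings), followed by a hedged sketch of the per-set delete from step 3. In the delete sketch, $setId is a placeholder for the snapshot set ID you would pass in; it is not the exact syntax from my LT script.

```powershell
# Readable expansion of the remote-monitor one-liner (logic unchanged):
# error if the oldest non-ClientAccessible shadow copy is over 7 days old.
$datetime = (Get-WmiObject Win32_ShadowCopy |
    Where-Object { -not $_.ClientAccessible }).InstallDate

if ($datetime) {
    $oldest = [Management.ManagementDateTimeConverter]::ToDateTime(
        ($datetime | Sort-Object | Select-Object -First 1))
    if ($oldest -lt (Get-Date).AddDays(-7)) {
        Write-Output 'Oldest_', $oldest   # error string the monitor matches on
    } else {
        Write-Output 'All good!'
    }
} else {
    Write-Output 'All good!'
}

# Sketch of the step-3 delete ($setId is a hypothetical parameter; the
# hard-coded wait gives someone a chance to abort a mistaken run):
# Start-Sleep -Seconds 180
# Get-WmiObject Win32_ShadowCopy |
#     Where-Object { $_.SetID -eq $setId -and -not $_.ClientAccessible } |
#     ForEach-Object { $_.Delete() }
```

The expanded form is easier to test interactively on a server before condensing it back down for the EXE monitor.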
  13. Or how about just a script on the desktop that does something like this: powershell.exe -command $(Get-Process -IncludeUserName | ? username -like "$($env:USERDOMAIN)\$($env:USERNAME)" | ? name -like "msaccess*" | stop-process)
  14. Oh, I see. You are planning on replacing every drive this monitor returns. So a drive should get higher priority if the result is closer to (or above) 1?
  15. So let me see if I understand this correctly. This monitor is returning drives that have had at least one Reported_Uncorrectable_Errors, and displays that drive as a ratio of its uptime out of 2 years? Basically you're saying that if a disk has a Reported_Uncorrectable_Errors, it has 100% failure rate after 2 years?