
HickBoy last won the day on July 22 2018


  1. Just confirming this works like a champ. We have multiple systems that respond to ping but are not usable by a human: things like shares being inaccessible, or a user being unable to log on remotely or via console. This monitor catches them.
  2. FYI, I was unsuccessful at importing the SQL statement. Could you please provide some screenshots of the actual monitor so I can create it manually instead? I am having trouble deciphering some of the SQL well enough to recreate it. I also tried modifying the following line, with no luck:

/*GroupID*/856 /* 856 is the group ID for "Service Plans\Windows Servers\Managed 24x7" */

to

/*GroupID*/,856 /* 856 is the group ID for "Service Plans\Windows Servers\Managed 24x7" */

NOTE: I am on hosted Automate with CW and no longer have direct access to SQL to view any errors.
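For anyone scripting around monitors like this, the `/*GroupID*/856` pattern is just an inline-comment marker followed by the ID, so you can swap in your own group ID before importing. Here is a minimal sketch of that substitution in Python (the helper name `set_group_id` is mine, not part of the monitor):

```python
import re

def set_group_id(sql: str, group_id: int) -> str:
    """Replace the number immediately following the /*GroupID*/ marker.

    Only the digits right after the marker are touched; numbers inside
    other comments (like the explanatory /* 856 ... */) are left alone.
    """
    return re.sub(r"(/\*GroupID\*/)\s*\d+", r"\g<1>" + str(group_id), sql)

line = '/*GroupID*/856 /* 856 is the group ID for "Managed 24x7" */'
print(set_group_id(line, 123))
# /*GroupID*/123 /* 856 is the group ID for "Managed 24x7" */
```

This keeps the SQL importable while letting each partner point the monitor at their own group.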
  3. Wow... that's a great report. It does work and is a great canvas to add some features. Thx for pointing me in that direction...
  4. Has anyone created a decent DataView or report template they want to share that pulls from their EDFs?
  5. Great idea... I added the following as the first line of each script to squelch extra error messages from the script polluting the result EDF. I am also working on a report to pull the data from the EDF into spreadsheet format for easy viewing.

$ErrorActionPreference = 'SilentlyContinue';
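The same idea, sketched in Python purely for illustration (the forum scripts themselves are PowerShell, and `run_quietly`/`check` here are hypothetical names): when a monitor or EDF captures whatever the script emits, routing error noise away from the captured stream keeps the stored value clean.

```python
import contextlib
import io
import sys

def run_quietly(fn):
    """Run fn while discarding anything it writes to stderr,
    so only the intended return value/output survives."""
    with contextlib.redirect_stderr(io.StringIO()):
        return fn()

def check():
    # This line would otherwise pollute the captured result.
    print("WARNING: transient lookup failure", file=sys.stderr)
    return "OK"

result = run_quietly(check)
print(result)  # OK
```

In PowerShell the `$ErrorActionPreference = 'SilentlyContinue';` line above plays the equivalent role for non-terminating errors.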
  6. I assume you have the Active Directory plugin installed? Yes And you confirmed the views are showing up and there is data in them? Yes
  7. Following the updated directions, I am now able to successfully create the tables/views via the ADViews.sql file, but importing the .REPX files continues to fail. I can successfully import the SubPageHeaderLandscape.REPX file as a Report or Sub-Report, but the other three reports fail to import. I get the same error as @Rafe Spaulding: the reports appear to start the import, I get a spinning disk for about 2 minutes, and then I get the error message.
  8. Same issue as others. I cannot import the SQL code correctly, as I get an error; if I type it all in manually, it imports. Even after typing it all in, I still cannot import the REPX files, as I get the same error message: "There was an error loading the report. Please verify that the report is in a valid format and try again."
  9. I've hacked one together that works fairly well (albeit not an officially supported uninstall method from Symantec). We use this as a last resort when the official removal methods do not work. It uses CEDAR from Symantec to clean the agent from the system. I also do some other stuff, such as pre-removing the registry values that usually stop Symantec from uninstalling when it complains about pending actions. I do this via an embedded batch file called from the LabTech script:

:: 04/02/2019
:: Clears PendingFileRenameOperations Registry Key to allow Symantec AntiVirus to Uninstall
:: Delete Key
REG Delete "HKEY_LOCAL_MACHINE\System\currentcontrolset\control\session manager" /v PendingFileRenameOperations /f 2> nul

Here are the two important parts of the script (NOTE: the -silent switch is case sensitive and undocumented). Let me know if you want the full thing and I will clean it up and export it.
  10. Darren is the master of the craft, so maybe my poorly written monitor could be something to improve upon. I have attached some screenshots and the code below. My monitor is designed to be reviewed on a HUD, where I look to see what application is eating up memory for the customer. It's not running on all customers; I only deploy it when I am trying to track down what process is using large amounts of memory, and it allows me an instant view into multiple computers via a dataview/HUD. The monitor does not create any alerts, but rather uses PowerShell to grab the process currently using the most RAM on the system and just spits out the process name and the value in MB. It's not super smart and does not handle things like 14 Chrome processes each using 250 MB per process; it reports whatever single process is using the most RAM. There's probably a better way to do this, but it's worked for me in finding customers where I can cross-reference another monitor I use that checks for >80% CPU usage over 20 minutes, etc.

Command line for the monitor:

%windir%\system32\WindowsPowerShell\v1.0\powershell.exe -executionpolicy BYPASS -command "(Get-Process | where {$_.ProcessName -notmatch 'Memory Compression'} | Sort-Object WorkingSet | Select-Object Name,@{n='Value';e={[Math]::Round($_.WS / 1MB,0)}},@{Name='Mem';Expression={'MB'}} -Last 1 | Format-Table -hidetableheaders | Out-String).trim()"

Command if you want to just test on any computer in PowerShell:

Get-Process | where {$_.ProcessName -notmatch 'Memory Compression'} | Sort-Object WorkingSet | Select-Object Name,@{n='Value';e={[Math]::Round($_.WS / 1MB,0)}},@{Name='Mem';Expression={'MB'}} -Last 1 | Format-Table -hidetableheaders
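The selection logic in that one-liner is simple enough to sketch in Python (for illustration only; the monitor itself is the PowerShell above, and the process names and sizes here are made up): from a set of (name, working set in bytes) pairs, report the single biggest consumer as "Name  Value  MB".

```python
def top_process(processes):
    """Given (name, working_set_bytes) pairs, return a one-line summary
    of the single process using the most RAM, working set rounded to MB."""
    name, working_set = max(processes, key=lambda p: p[1])
    return f"{name} {round(working_set / 2**20)} MB"

# Hypothetical sample data, in bytes:
procs = [
    ("chrome",   250 * 2**20),
    ("sqlservr", 1400 * 2**20),
    ("svchost",  80 * 2**20),
]
print(top_process(procs))  # sqlservr 1400 MB
```

As the post notes, this deliberately ignores many small processes of the same name; it only surfaces the single largest working set.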
  11. Here's an updated version that works with Symantec Endpoint Protection Cloud, which is the version that will soon replace the end-of-life version called Symantec Endpoint Protection Small Business Edition, which shows up in Add/Remove Programs as "Symantec.Cloud" to confuse just about everyone out there... (Thanks, Symantec.) The only difference between this and Gavsto's previous post is that the AP process name has changed in the new version to SCS*. I tested this on both a server OS and a workstation OS, as there are new billing codes for each version now, yet both seem to use SCS.EXE for running the A/V.
  12. Hopefully this post will revive some life into this idea. For anyone interested, here's my TreeSize Pro script that I use to get disk-space reports for our customers. This script is pretty old, so I went and wrote some documentation on how to get it installed and working correctly. I hope it can help anyone out there looking to use TreeSize Pro with Automate, and possibly give people ideas on how to make it better. The TreeSizeProAutomateScript.7z file attached to this post includes the XML script and a PDF instruction guide on how I configured my system. I am sure there are a million better ways to set this up, and you're welcome to hack away to your heart's content.

Some important points before you download:
  • The script requires a licensed copy of TreeSize Pro, as the FREE version does not support passing command-line parameters.
  • You need an OLD version of TreeSize Pro (pre-6.2), as anything newer has the following limitations:
      • Newer versions will not run on WinXP/Win2003.
      • Newer versions require .NET 4.5 pre-installed.
      • Newer versions require the portable version to be run from a USB drive.
    The old version that I use with my script (32-bit) does not have any of the above limitations. You will need to contact JAM Software (URL in the documentation) to obtain the legacy version.
  • The script is not plug-and-pray. You need to modify two lines to confirm the location of where you are storing the TreeSize.EXE and TreeSize.INI files, and you need to modify the location of where you are storing the 7za.exe file that zips up the reports and prepares them for e-mailing to you.

Here's an overview of what the script does:
  1. Runs TreeSize Pro silently on the user's computer.
  2. Creates 3 files:
      • A %COMPUTERNAME%_FILES.XML that you can load into Excel to see the largest files.
      • A %COMPUTERNAME%_FOLDERS.XLSX that you can load into Excel to see the largest folders.
      • A %COMPUTERNAME%_TREE.XML that you can load into any copy of TreeSize Pro to review the computer in detail.
  3. Once complete, the three reports are zipped up (using 7-Zip) and e-mailed to the tech who ran the script.

Example Largest Files: Example Largest Folders: Example TreeSize.XML report: The Script: Attachment Updated: 7/22/2018 @ 2:00 p.m. PST TreeSizeProAutomateScript.7z
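The packaging step at the end can be sketched in Python for anyone adapting the workflow (the actual script uses 7za.exe; this stand-in uses Python's zipfile, and the `zip_reports` helper and file locations are my own, not part of the attachment). It mirrors the script's %COMPUTERNAME%_* naming convention:

```python
import os
import zipfile

def zip_reports(computername: str, report_dir: str) -> str:
    """Bundle the three per-machine TreeSize reports into one archive
    ready to e-mail; returns the archive path."""
    archive = os.path.join(report_dir, f"{computername}_reports.zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as z:
        for suffix in ("_FILES.XML", "_FOLDERS.XLSX", "_TREE.XML"):
            path = os.path.join(report_dir, computername + suffix)
            # Store files flat (no directory prefix) inside the archive.
            z.write(path, arcname=os.path.basename(path))
    return archive
```

Swapping zipfile in for 7-Zip removes one external dependency if you rework the pipeline outside of the Automate script.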
  13. We have the same issue, but not just with service monitors. We are also seeing this with SNMP remote monitors. If a power supply fails and is subsequently replaced, the monitors will occasionally get stuck in the failed state. Running the test shows them in a success state, just like your example. We've had a ticket open since Nov 2017 on the issue, but started seeing it in October 2017. Ticket #9588384 in LabTech support, if you need to refer to a similar issue. We also just got upgraded from 11 to 12, and I have not seen this issue since we went to 12. It was very sporadic and hard to get LabTech to see the issue in real time. The fix was always to delete/re-create the monitor. Hope that helps to let you know you're not alone with the issue. NOTE: If it does appear in LT12, I will post back that the issue was not resolved for us. Here's an example of an SNMP remote monitor that shows as failed, but is actually healthy: