


Popular Content

Showing content with the highest reputation since 10/14/17 in all areas

  1. 13 points
    1. Ensure PowerShell version 4.0 or higher is installed on the LabTech server itself (needed for Invoke-RestMethod to not prompt for credentials).
    2. Create the folder C:\Program Files\WindowsPowershell\Modules\LabTech on your LabTech server and place LabTech.psm1 in it.
    3. Import the Update-MissingWarranties.xml script into Automate, ignoring warnings about version mismatch. (Version 11: Tools -> Import XML Expansion. Version 12: System -> General -> Import -> XML Expansion.)
    4. Unrestrict the script execution policy on your LabTech server: Set-ExecutionPolicy Unrestricted -Confirm:$false -Force (see the consolidated sketch after this item)
    5. Open Immense Networks Scripts -> Update-MissingWarranties, copy the value of the PowershellCode variable into PowerShell ISE on the LabTech server, and run it. There should be no errors about script execution or modules not found. You may get a rate-limit error from the API, but that's relatively normal.
    6. Open your _System Automation -> Onboarding -> Initial System Configuration - Partner script and insert a Script Run step that runs the Immense Networks Scripts -> Update-MissingWarranties script.
    7. Open the Manage plugin. (Version 11: click the ConnectWise button at the top of the main LabTech interface. Version 12: System -> Manage Integration.)
    8. Set the Manage plugin to map the following fields (see screenshot below): PurchaseDate to Computer.Asset.TagDate, and WarrantyExpiration to Computer.Asset.WarrantyEndDate.
    Update 2018-11-09: Refactored code, implemented bulk lookups with the Dell API. Attempted to fix HP lookups. Works in small batches.
    Update 2018-11-26: Fixed an issue with the function responsible for breaking computers into chunks of 100 (it was returning a flat array with all objects). Modified the trim functionality to only trim serial numbers that require trimming.
    Update 2019-03-28: Filtered Dell Digital Delivery entries out of warranties. *Note: you will have to remove all the current Dell warranty info in your database and re-run this script to get accurate data: update labtech.computers set warrantyend=null where biosmfg like '%dell%';
    Update 2019-03-28 (2): HP: Added Part Number and Serial Number to HP API requests, as we found the HP warranty API usually needs both to find the warranty. Dell: Updated AssetDate to be the ShipDate value returned from the API (it was previously returning the start date of the warranty with the latest end date). This will require you to clear out both your assetDate and warrantyEnd fields for Dell computers. Use the following query: update labtech.computers set warrantyend=null, assetdate=null where biosmfg like '%dell%'
    Attachments: LabTech.psm1, Update-MissingWarranties.xml
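    A minimal consolidated sketch of the server-side prep in steps 1, 2, and 4 above (run as Administrator on the LabTech server; paths and commands are taken from the steps):

    # Requires PowerShell 4.0+ so Invoke-RestMethod won't prompt for credentials
    $modulePath = 'C:\Program Files\WindowsPowerShell\Modules\LabTech'
    New-Item -ItemType Directory -Path $modulePath -Force | Out-Null
    Copy-Item '.\LabTech.psm1' -Destination $modulePath   # the module attached to this post
    Set-ExecutionPolicy Unrestricted -Confirm:$false -Force
    Import-Module LabTech                                 # should load with no errors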
  2. 12 points

    Version 1.0.4

    584 downloads

    The Internal Monitor "Notify When Agent is Online" watches machines with the "Notify When Online" computer EDF configured. It will send an alert as soon as it finds that the agent is offline. (The offline notice is skipped if the agent was already offline when notifications were first enabled.) When the agent comes online again another alert email will be sent and the EDF will be reset. This monitor can be used to notify when a lost computer comes online, or when that machine that is only online in the office every few weeks is back. To enable notifications for an agent, you simply put your email address into the "Notify When Online" EDF. You can enter multiple addresses separated by ";". The contents of the agent's "Comments" will be included in the email also. (Helpful to remember why you wanted to be alerted, or what instructions should be followed after receiving the alert.) When the agent returns online, the Network Inventory and System Info are refreshed. The recovery email will include the following details: The last check in was at @AgentCheckIn@. Public IP Detected: %RouterAddress% Internal IP: %LocalAddress% System Uptime: %uptime% Last Logged in User: %lastuser% This bundle includes a Script+EDF XML, and a SQL file with the Internal Monitor. To import the Script and EDF, select Tools -> Import -> XML Expansion. After import the script should appear in the "\Autofix Actions" folder. To import the Internal Monitor, select Tools -> Import -> SQL File. The monitor should be imported AFTER the script bundle has already been added. After importing, verify that a valid Alert Template is selected for the monitor. The Alert Template MUST have the "Run Script" action enabled without any script specified in the template. (The script is set on the monitor) Read the Script Notes for advanced control over the number of times a notification will be triggered.
  3. 11 points
    This thread is meant to be an overview for each individual Version 12+ patch. We know that not all patches are created equal, and there are some patches that we should avoid like the plague (I'm looking at you, 11.18/12.0). The top post will be consistently updated with known bugs/issues for each patch, so you can tell at a glance whether you feel comfortable upgrading or would rather hold out for the next patch. At the time of creating this, Patch 12.5 (12.0.5.327) has just been released, so the entries for previous patches might not be as detailed as those for 12.5 and newer.
    Blue = Must install ASAP (unless skipping to a newer patch)
    Green = Stable Patch
    Orange = Relatively Stable Patch
    Red = Unstable Patch
    Official ConnectWise Automate patch notes for all released patches can be found here.
    Patch 2020.4 (20.0.4.141)
    Benefits/Features: Web CC - Several script scheduling updates (delete, edit, change priority of scheduled scripts)
    Known Bugs/Issues:
    Patch 2020.3 (20.0.3.112)
    Benefits/Features: Web CC - Scheduled scripts now have "high priority" when running; maintenance mode can now be set. Logging - Added some additional logging features; Control now prompts for a reason when remoting into a machine.
    Known Bugs/Issues:
    Patch 2020.2 (20.0.2.81)
    Benefits/Features: Web CC - Scripts can now be scheduled in the Web CC instead of only being able to run on demand. Office 365 option added to the Dashboard ahead of the forced Azure security changes and "app registration" (see more info about it here). The Control Center will no longer leave rogue windows open when the CC is closed; all Automate-related windows will be closed when the CC itself is closed.
    Known Bugs/Issues:
    Patch 2020.1 (20.0.1.51)
    Benefits/Features: Mandatory MFA - users must have an email address defined (email address is now a required field for all users). Automate has retired the legacy Report Manager and Crystal Reports as of December 31, 2019.
    Known Bugs/Issues:
  4. 11 points
    We, as a team, have noticed over the last couple of days that there has been some pretty heavy criticism of ConnectWise across forums, Slack, and social media outlets like Reddit in relation to the problems with the Automate binaries. A lot of the criticism has been relatively unfounded. The problems found within the product are not issues that we believe could have been picked up by any standard QA or testing measures - bearing in mind that our team member only stumbled on to this by complete accident. In regards to the criticism surrounding ConnectWise's notification to partners, it's much easier for the leaders in a community like ours to notify our members as we are not bound by the processes, procedures and multiple teams required to get a fix like this out successfully at a corporate level. Though there are clear areas for improvement in QA and testing processes, every member of the MSPGeek team was impressed at the speed of response and subsequent delivery of a fix by the Automate team. Our community was founded and based upon the idea of mutual assistance, open sharing, good communication and, for the past 6 years, has provided a trusted platform for users to support each other while helping ConnectWise make the product better. Let's continue to help ConnectWise by providing constructive criticism while helping them, and each other, through this bump in the road; it's the MSPGeek way.
  5. 10 points
    This monitor is designed to check the health of your DNS server forwarders. Copy "DNSForwarderCheck.vbs" to your LTShare\Transfer\Monitors\ folder. Import the SQL file in Control Center -> Tools -> Import -> SQL File. This will create a group monitor on the "Windows DNS Servers" group under "Service Plans\Windows Servers\Server Roles\Windows Servers Core Services". The monitor action will be "Default - Raise Alert". Here are a couple of examples of servers with underperforming DNS forwarders.
    20180119 - Updated DNSForwarderCheck.vbs to support additional delimiters and corrected additional date handling issues for better international support.
    20180213 - Updated to perform all lookups in PARALLEL, so intermittent WAN issues should affect the lookups equally. Removed the dependency on dnscmd.exe to learn forwarder IPs; now works with foreign-language installations of Windows Server.
    Once you identify issues, use a tool like GRC DNS Benchmark to identify what your best server choices are. https://www.grc.com/dns/benchmark.htm
    The attachment has been moved. See https://www.labtechgeek.com/files/file/21-dns-forwarder-check/
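    As a rough illustration of what "underperforming forwarders" means, here is a hedged sketch (not the shipped VBScript) that times one lookup against each configured forwarder. Get-DnsServerForwarder assumes the DnsServer module on Server 2012+, and the real monitor runs its lookups in parallel:

    $forwarders = (Get-DnsServerForwarder).IPAddress.IPAddressToString
    foreach ($ip in $forwarders) {
        # Time a single query against this forwarder and report it in milliseconds.
        $ms = (Measure-Command { Resolve-DnsName 'example.com' -Server $ip -ErrorAction SilentlyContinue }).TotalMilliseconds
        '{0}: {1:N0} ms' -f $ip, $ms
    }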
  6. 10 points
    Here's the latest version of this script that we've now used successfully on over 100 systems. This one is pushing build 1709, but you'll notice you need to edit it to provide the path to your server hosting the ISO. I was a little lazy this time and just exported one script with all of the other ones embedded in the export. When you import this, you'll get the following scripts:
    - Upgrade Win 10 (From Tray Icon) - ERG: Script designed to be added to your tray icon to let users start the install for themselves
    - Remove Duplicate Scripts in Queue: A function script I wrote to avoid having the same script queued multiple times for an agent
    - Upgrade Windows 10 - ERG: The main show. This is what you run manually if you want to schedule the script for an agent yourself.
    - BITS Download - ERG: Helper function script to handle downloading the ISO using BITS (see the sketch after this item)
    - BITS Status - ERGTEST: Ignore this. Not really sure why it's here
    - BITS Status - ERG: Helper function to the helper function script above. I know @DarrenWhite99 says you don't need to do it this way, but I had issues the 'official' way and this has been working for us.
    - FUNCTION - Email Results to Technician*: @DarrenWhite99's amazing function script for emailing whoever kicked off the script, giving status updates
    I don't recall having any failure-to-reboot issues with this version. NOTE: I've removed any function for upgrading from prior versions of Windows (7 and 8). This is strictly for doing build-to-build upgrades of Windows 10 systems now.
    Upgrade Win10-ERG.zip
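    For reference, a minimal sketch of the kind of BITS transfer the "BITS Download - ERG" helper performs; the UNC path is a placeholder for your own ISO host:

    Import-Module BitsTransfer
    # Pull the 1709 ISO down to the agent; BITS throttles politely and survives reboots.
    Start-BitsTransfer -Source '\\YOURSERVER\ISOs\Win10_1709.iso' -Destination 'C:\Windows\Temp\Win10_1709.iso'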
  7. 9 points

    Version 1.0.0

    658 downloads

    This role definition detects when Bitlocker is enabled on a machine. To import these Role Definitions, in the ConnectWise Automate main screen, go to Tools > Import then choose SQL File. Browse to the relevant file, and OK the message about inserting one row.
  8. 9 points
    I completely agree, which is precisely why there should be a proper structure in place for reporting security vulnerabilities. Losing vulnerability reports because a support ticket was closed when a partner didn't respond is serious amateur-hour stuff. This is also the second time I know of that this has happened (one of my privately reported vulnerabilities got lost in the same way, mostly because the initial support engineer could not comprehend what I was trying to raise). I implore ConnectWise to put a proper procedure in place for reporting security vulnerabilities that allows for responsible disclosure. In the meantime, at least train the existing staff to escalate anything like this immediately to the appropriate resource.
  9. 8 points
    I saw another post reporting that cloned systems with LT installed before imaging could continue to check in separately under the same ID. They implemented a solution using a script that would identify suspicious activity as any agent that had reported more than 2 different names within the past 7 days. I recognized a simple way that this could be detected in an "Agent ID Sharing" monitor: by testing the agent history for a name change where the newly changed name was changed to the same name more than 2 times. Only a machine with more than 2 name changes to the same name would meet that criteria. For a false positive, this would require an agent "X" to be changed like: X->A->X->A->X->A (changed to "A" 3 times). Since the monitor is only looking over the span of 1 day, it is extremely unlikely that someone would generate that many name changes in such a short period. When 2 or more agents are checking in to the same ID, the name can be changed dozens of times every day.
    I also created a "Duplicate Agent Detection" monitor that uses weighted criteria to identify when a machine has begun reporting in under a new ID. This is common when a machine has had the OS reinstalled or upgraded, or had the LabTech agent reinstalled (or otherwise had a malfunction), causing it to get a new ID. The monitor looks for any 2 agents that have the same machine manufacturer, model, and serial number, and any 4 or more matches for the agents' OS, OSVersion, BiosVersion, Domain, or TotalMemory. I have attached SQL files you can use to import these monitors.
    To monitor for "Agent ID Sharing", create an internal monitor with these settings (updated 2017-08-29 - improved to eliminate false positives from multiple computer renames, and resolves a compatibility issue with MariaDB):
    Interval: Daily
    Monitor Mode: Send Fail after Success
    Table To Check: computers
    Field To Check: ComputerID
    Check Condition: Anything
    Result:
    Identity Field: (SELECT GROUP_CONCAT(DISTINCT UPPER(hc2.NewData) ORDER BY hc2.NewData SEPARATOR ',') FROM h_computers AS hc2 WHERE hc2.computerid=computers.computerid AND hc2.What='Name' AND hc2.When>DATE_ADD(NOW(),INTERVAL -1 DAY) GROUP BY hc2.computerid)
    Additional Condition: computers.computerid IN (SELECT DISTINCT hc.computerid FROM (SELECT hc1.computerid, COUNT(hc1.hisid) AS ChangedCount FROM h_computers AS hc1 WHERE hc1.what='Name' AND hc1.When>DATE_ADD(NOW(),INTERVAL -1 DAY) GROUP BY hc1.computerid,hc1.NewData) AS hc WHERE hc.ChangedCount>2)
    To monitor for "Duplicate Agent Detection", create an internal monitor with these settings:
    Interval: Daily
    Monitor Mode: Send Fail after Success
    Table To Check: computers
    Field To Check: ComputerID
    Check Condition: InSet
    Result: (SELECT DISTINCT C2.computerid FROM (`computers` AS c1 LEFT JOIN `computers` AS c2 USING (`ClientID`,`LocationID`,`Name`)) WHERE NOT ( c2.`ComputerID` IS NULL OR c1.`ComputerID`=c2.`ComputerID` OR C1.DateAdded>C2.LastContact OR C1.LastContact>C2.LastContact ) AND ((C1.`OS`=c2.`OS`)+(C1.`Version`=c2.`Version`)+(C1.`BiosFlash`=c2.`BiosFlash`)+(C1.`Domain`=c2.`Domain`)+(C1.`TotalMemory`=c2.`TotalMemory`)+10*((C1.`BiosName`=c2.`BiosName`)+(C1.`BiosVer`=c2.`BiosVer`)+(C1.`BiosMFG`=c2.`BiosMFG`)))>33)
    Identity Field: (SELECT GROUP_CONCAT(c2.computerid SEPARATOR ',') FROM computers AS C2 WHERE c2.computerid<>computers.computerid AND c2.clientid=computers.clientid AND c2.locationid=computers.locationid AND c2.Name=computers.Name AND (((C2.BiosName=computers.BiosName)+(C2.BiosVer=computers.BiosVer)+(C2.BiosMFG=computers.BiosMFG))*10+(C2.OS=computers.OS)+(C2.`Version`=computers.`Version`)+(C2.BiosFlash=computers.BiosFlash)+(C2.Domain=computers.Domain)+(C2.TotalMemory=computers.TotalMemory))>33)
    Internal Monitors - Duplicate or Shared Agent Detection.zip
  10. 8 points

    Version 2.2.0

    524 downloads

    This monitor identifies the current agent version for OSX, Linux and Windows agents. Every 5 minutes the monitor checks for agents that are below the current version and issues the Agent Update command to a small batch of online agents. It will only issue the command once per day for any particular agent, and only if there are no pending/executing commands already in the queue for the agent. It will dynamically calculate the concurrency limit so that it can process all of your agents in about 12 hours' time. This spreads the load out while quickly bringing all agents up to date as soon as possible. Commands from before the agent was restarted are ignored, so it can update right away after a reboot even if it already failed to update within the past 24 hours. The monitor will only report verified update failures, so you can use this monitor to generate a ticket or to run your own update remediation script, since you know the self-update has failed. Even though only a small number of agents will be asked to update each time it runs, the monitor will report ALL online, out-of-date agents. You do not need to run a script or generate a ticket or do anything else; the monitor issues the update commands directly in SQL. If you don't generate a ticket, you should check the monitor periodically and see which agents are reported, since they are failing to update and will need some sort of manual intervention.
    This file has been specially prepared to be safe for importing in Control Center through Tools -> Import SQL. The monitor will be named "CWA AGENT - Needs Update". If you have imported a previous version of this monitor, most of your customizations (Monitor Name, Alert Template, Ticket/Report Category, etc.) will be preserved, but the necessary changes will be made automatically. Unlike prior versions of this monitor, you can safely "Build and View" and it will not continue to add more update commands. Pending/In Process update commands will reduce the concurrency limit so that it never overloads the system with too many update commands at once.
    FAQ:
    My agent is outdated but the monitor doesn't show it. Why?
    These are the criteria for an update command to be issued. Until the monitor tries to update an agent, it will never report it as failing to update. If any of these conditions are not met, that is why you aren't seeing the agent update command:
    - Is the agent version out of date?
    - Is the agent online? (lastcontact within the past 15 minutes)
    - Are there no commands currently pending or executing?
    - Have no update commands been issued within the past day for the current version?
    - Have fewer than LIMIT (a custom value dynamically adjusted for your environment) update commands already been issued and not yet completed?
    After answering YES to all of these checks, the monitor will issue the command to update the agent. It will only permit up to LIMIT update commands to be pending/executing at once, so if you have a large number of agents to update it might be a while (up to 12 hours) before any particular agent is asked to update.
    Once an agent has been asked to update, the following criteria determine if the agent will be reported as failed:
    - Has an update command been issued within the past day?
    - Is the agent online? (lastcontact within the past 15 minutes)
    - Did the update command report failure, OR has the update command been executing/completed for over 2 hours?
    - Is the agent version still out of date?
    After answering YES to all of these checks, the monitor will report that the agent has failed to update.
    Why won't my agent update?
    This can be caused by many reasons. Some common ones:
    - Insufficient agent resources (low RAM/disk space/available CPU)
    - Another software install is in progress.
    - The agent is pending a reboot.
    - A file cannot properly extract from the update. (Check for ReadOnly attributes or invalid file/folder permissions for "%WINDIR%\Temp\_LTUpdate")
    - A file is locked and cannot be replaced. (LTSvc.exe, LTTray.exe, etc. might fail to stop. A .DLL might be open or locked by AV or a third-party program.)
    Nearly all of these are resolved with a reboot, so that is a good troubleshooting step after checking the file attributes/permissions.
    What alternative methods are available to update my agent?
    Look for this section to be expanded in the future. LTPoSh has an Update-LTService function; calling this function through Control is a highly effective way to resolve update failures for Windows agents (see the sketch below). LTPoSh (or other solutions) can be used to reinstall the agent using Control, PSExec, or any other method at your disposal. (I have used ESET Remote Administrator to execute an install command, for example.)
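    A hedged example of the LTPoSh approach mentioned above. The download URL is the commonly used LTPoSh location (LabtechConsulting's LabTech-Powershell-Module on GitHub); verify it before relying on it:

    # Load LTPoSh into the session (e.g., via a Control backstage command window), then trigger the agent self-update.
    (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/LabtechConsulting/LabTech-Powershell-Module/master/LabTech.psm1') | Invoke-Expression
    Update-LTService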
  11. 8 points
    Just to help the community: I have our system run a BitLocker get-status script on all devices. I have made several EDFs for the device to store information. All of these are LOCKED so none of the techs can alter or adjust them.
    TPM Enabled: runs the PowerShell command get-tpm | select -ExpandProperty AutoProvisioning. There is an IF statement where, if the TPM is enabled, it marks the TPM EDF as enabled. This tells me if the device is able to be encrypted (we try to always have TPM).
    Checks if BitLocker ProtectionStatus is on: runs the PowerShell command Get-BitLockerVolume -MountPoint "C:" | Select -ExpandProperty ProtectionStatus. There is an IF statement where, if the ProtectionStatus is ON, it will check the 'BitlockerEnabled' box.
    The script will always run the two PowerShell commands below, regardless of whether BitLocker is enabled:
    Bitlocker Recovery Key: manage-bde -protectors -get C:
    Get Bitlocker Status of C:: manage-bde -status C:
    The 'Date checked for Encryption' EDF is a self-diagnosing piece to tell me when the script last ran.
    Example contents of each EDF:
    Bitlocker Recovery Key:
    BitLocker Drive Encryption: Configuration Tool version 10.0.17134
    Copyright (C) 2013 Microsoft Corporation. All rights reserved.
    Volume C: []
    All Key Protectors
    TPM:
    ID: {E123456-E123-F123-F123-D123456789012}
    PCR Validation Profile: 0, 2, 4, 11
    Numerical Password:
    ID: {1DDB4148-A123-B123-C123-B12345678901}
    Password: 123456-123456-123456-123456-123456-123456-123456-123456
    Bitlocker Status of C::
    BitLocker Drive Encryption: Configuration Tool version 10.0.17134
    Copyright (C) 2013 Microsoft Corporation. All rights reserved.
    Volume C: [] [OS Volume]
    Size: 475.49 GB
    BitLocker Version: 2.0
    Conversion Status: Used Space Only Encrypted
    Percentage Encrypted: 100.0%
    Encryption Method: XTS-AES 128
    Protection Status: Protection On
    Lock Status: Unlocked
    Identification Field: Unknown
    Key Protectors: TPM, Numerical Password
    Attached is the SQL for importing the EDF files as well as the script used. Hope this helps!
    Get Bitlocker Status of Device.xml Bitlocker-ComputerEDF.sql
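    A minimal sketch tying the four checks above together (the EDF writes themselves are performed by Automate script steps, which aren't shown here):

    $tpmReady  = (Get-Tpm).AutoProvisioning                               # feeds the "TPM Enabled" EDF
    $protected = (Get-BitLockerVolume -MountPoint 'C:').ProtectionStatus  # feeds the 'BitlockerEnabled' checkbox
    $recovery  = manage-bde -protectors -get C: | Out-String              # feeds "Bitlocker Recovery Key"
    $status    = manage-bde -status C: | Out-String                      # feeds "Get Bitlocker Status of C:"
    if ("$tpmReady" -eq 'Enabled' -and "$protected" -eq 'On') { 'TPM present and BitLocker protection is on' }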
  12. 8 points
    At some point in your career with LabTech you'll be forced to work directly with the database. Normally the go-to recommendation is Chris Taylor's docs at http://labtechconsulting.com/LT-DB/index.html; however, that schema hadn't been updated since 2015, with many large and small changes over the years. Also, comments had been truncated, leaving many cut off mid-sentence. I have two updated versions available. One is a downloadable chm file made in the same style as the official ConnectWise Manage data dictionary. I also have an online version that's nearly identical. I recommend the chm as it has a few notable advantages. The search in the chm file can do a full search across *everything*, including the code for the triggers/views/procs, etc. Not only that, but you can also pin anything frequently used to your favorites, and that persists. Both feature vector graphs to visually show relationships, as well as the underlying code, though! I also included all core LabTech plugins so it'd be the most complete reference doc. Everything is drillable, so you can easily explore how different objects relate to each other. If you have a good writeup for any descriptions, I can roll it into an updated release. Hopefully one day everything will be documented! View the code for functions, procedures, and views. Example: viewing a trigger after clicking on one of our related dependencies. Pin frequently used tables/views to your favorites tab! I'll be updating them if anyone has any feedback or comments I can add in. Feel free to shoot me a message on here or on the LTG Slack ("smeyer").
    Online: LabTech Data Dictionary
    Offline Docs: LabTech 2019.4.chm *New and improved* You may have to unblock the file after downloading!
    Older Releases: Online v12 P8 Data Dictionary
  13. 8 points
    EDIT: 07/01/2018 - New version available which fixes a minor bug with some variables not setting in certain conditions.
    I spent last night putting this together. It utilises the original Microsoft PowerShell script to generate a number of EDFs that indicate a machine's current status for these vulnerabilities. Here are the EDFs that are generated: these are in an EDF tab called "Meltdown and Spectre Detection". The key EDF is "Is the machine secure", which will only tick when all other conditions are met and the machine is deemed secure. I have decided to put this script and the associated XML on GitHub. I have tested this on systems with PowerShell 2, but as per the attached license on GitHub the software is provided AS IS without warranty of any kind. Test it fully before you roll it out. Upcoming planned improvements: 1) Better error handling. 2) Internal monitor for detecting insecure machines. If anyone wants to submit pull requests to the PS1, I will merge them and at the same time update the ConnectWise Automate / LabTech script. https://github.com/gavsto/ConnectWise-Automate-Meltdown-and-Spectre-Detection-Scripts Only the .XML is needed - the .PS1 from the GitHub repo is embedded in the actual Automate script - to repeat, you do not need the .ps1 from the GitHub repo for this to get imported and detecting.
    Usage:
    1) Import the XML script (twice). If you are still having problems following the second import, reload the system cache.
    2) Ensure the EDFs have been created by opening an agent, going to EDFs, and going to the Meltdown and Spectre Detection section
    3) Make sure the EDFs are all there
    4) Run the script (by default, this imports into Scripts > Meltdown and Spectre Detection) against an agent, which will populate the EDFs
    But how do I actually fix this?
    1) Install the January 2018 security updates that were released a few days ago. These will only install if your AV provider has added a specific reg key to indicate that it works with the update.
    2) Install the latest BIOS/firmware upgrade from your hardware provider (Dell released a batch last night)
    3) Follow the instructions here to add the relevant registry keys to enable the mitigations (see the sketch below): https://support.microsoft.com/en-hk/help/4072698/windows-server-guidance-to-protect-against-the-speculative-execution
    A reboot is required after step 3. Feedback and improvements welcome!
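    For step 3, a hedged sketch of the registry values described in the linked Microsoft guidance. The value names and data here are assumptions pulled from that KB for enabling the mitigations on servers; verify against the article for your OS and role before deploying, and reboot afterwards:

    # Enable the Spectre/Meltdown mitigations per the server guidance (verify values against the KB linked above).
    $mm = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management'
    Set-ItemProperty -Path $mm -Name 'FeatureSettingsOverride' -Type DWord -Value 0
    Set-ItemProperty -Path $mm -Name 'FeatureSettingsOverrideMask' -Type DWord -Value 3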
  14. 8 points
    This RAWSQL monitor uses data from the Active Directory plugin to check for active AD computer accounts (with recent logon times) that do not have an Automate agent installed. This is helpful to identify computers that were joined to the domain without the proper procedure, or computers that your agent deployment has not succeeded in reaching. The alert will be targeted against a probe computer for each client; you will need to read the alert body (or subject) to find out the name of the computer without an agent. Because the alerting is "Probe" centric, be aware of the following:
    - If you enable group targeting, and your targeting excludes the probe computer for a client, no machines will be reported for that client.
    - If you have no probe at any location for a client, no machines will be reported for that client.
    - If you Exclude or Ignore the "computer", you are really excluding the probe computer, and if you do this no machines will be reported for that client.
    There is no simple way to whitelist machines that do not have an agent installed on purpose. You will just have to live with them being reported. Thank you to @Gavsto for helping test this monitor and spot issues.
    The attachment has been moved. See https://www.labtechgeek.com/files/file/19-domain-computers-without-automate-agent/
  15. 7 points
    Please try to donate if you can. A lot of time went into this plugin - DOWNLOAD HERE
    Thank you to my donors: Mendy Green, Gavin Stone, Matt Navarette, ITLogix LLC, Derek Leichty, Kevin Bissinger, DataServ Corporation, NeoLore Networks Inc
    What this plugin does: This plugin will display your passwords that are on client and location levels in LabTech and present them for sending in ScreenConnect sessions. Gone are the days of copy/pasting passwords.
    Requirements: LabTech 11+, ScreenConnect 6.1+
    How to install:
    First, head over to the extensions section in ScreenConnect and install the RMM+ Passwords extension. ScreenConnect 6.1 or greater is required.
    Second, install the attached LabTech plugin. RMM+ Password Link
    Third, go and configure some timeout settings:
    A. Token Valid For How Many Minutes: an absolute token timeout in minutes. Set to -1 to make tokens last forever.
    B. Token Idle Expire Minutes: an idle timeout; if the plugin isn't used, or a password isn't queried or passed, for this many minutes, it will invalidate the token.
    C. Only Allow One Token Per User: allows users to have passwords on only one computer (checked) or any number of computers (unchecked).
    Fourth, set permissions for the plugin.
    Fifth, activate by doing the following in ScreenConnect: fill in your LabTech server URL (please only use SSL, or you will be sending passwords in the clear over an unencrypted connection), LabTech username, and LabTech password. Decide what items you want to show and log in. As of 1.0.22 the Automate URL is no longer in the passwords helper; please set it by going to Options -> Edit Settings in the Control extension.
    Sixth, enjoy saving time!
    Changelog:
    ScreenConnect/Control Plugin
    1.0.5 - Initial release
    1.0.6 - Graphical updates
    1.0.7 - Added ability on the login form to save the username. Fixed issue where the command section would lose the access key used to send commands.
    1.0.8 - Fixed bug that wouldn't allow brackets in script names, password names or password values.
    1.0.9 - Fixed a bug introduced in the regex in 1.0.8 that would not allow it to detect empty results. Added ability to type "wake" into the command line and send the ScreenConnect version of wake to the computer.
    1.0.10 - Corrected issue with the wake command not being recognized if the case was not all lower.
    1.0.11 - Corrected issue to properly escape strings sent in commands.
    1.0.12 - Updated command to not wait the full timeout if data returns quicker. Also prepends !#timeout to the command for SC.
    1.0.13 - Updated script list to contain the actual folder structure as it appears in LT. Added a password filter box. Now allows showing/hiding sections on demand and includes a client/location picker that allows you to change locations or clients (if, for instance, you have generic passwords stored on your internal client location). Added ability to hide passwords per user to get rid of extra clutter that isn't needed.
    1.0.14 - Various CSS fixes (thanks Andrea). Added filter box to auto-highlight the top match in the password list; the Enter key will send the highlighted match.
    1.0.20 - Added ability to send a carriage return after the password.
    1.0.21 - Updated methods for Control version 19 compatibility.
    1.0.22 - Added optional dark theme to match Control version 19. The Automate URL is no longer in the passwords helper; please set it by going to Options -> Edit Settings in the Control extension.
    1.0.24 - Moved the Automate URL back to the login form because Control now ties the helper to the signature file.
    1.0.25 - Fixed broken commands from 1.0.22.
    1.0.27 - Added code to allow MFA for login. Uses the same MFA code as the /automate login.
    1.0.28 - Added checkbox to always show the MFA box so that login can be a single step instead of two.
    1.0.29 - Changed the way you can execute commands to require a shared key between Automate and Control. This must be set in the plugin settings to use commands in the helper.
    Labtech Plugin
    1.0.0.1 - Initial release
    1.0.0.2 - Added permission requirement of Read Passwords for the client. If a user class doesn't have the Read Passwords permission, it will not let them go to the SC plugin.
    1.0.0.3 - Added the ability to show username instead of display name as the password identifier.
    1.0.0.5 - Corrected issue with incorrectly determining permissions (thanks Eric Besserer for the help). Added ability to block/allow script scheduling per location.
    1.0.0.7 - Fixed issue where a login token set for more than 24 hours would invalidate each night. Various other bugfixes.
    1.0.0.8 - Added regex include and exclude options for selectively showing passwords.
    1.0.0.9 - Added auditing of passwords sent, passwords copied and scripts sent to the Dashboard Audit section.
    1.0.0.11 - Added ability to regex include or exclude passwords from view globally. Security update to only show clients in the client selector that someone has access to.
    1.0.0.13 - Fixes to regex include/exclude and superadmin permissions.
    1.0.0.14 - Added ability to send a carriage return after the password.
    1.0.0.16 - Corrected permission issue that was only looking at client-level permissions and not computer permissions, thereby allowing group permissions to work correctly.
    1.0.0.17 - Corrected permission issue that was only looking at client-level permissions and not computer permissions, thereby allowing group permissions to work correctly.
    1.0.0.18 - Added security feature for IP filtering. The IP address that is captured on requests is whatever IP the HOST client of ConnectWise Control is running on.
    1.0.0.20 - Added MFA login using the same code that you use for /automate. REQUIRES Control plugin 1.0.27 or higher.
    1.0.0.21 - Added shared key that must match the Control plugin settings key in order to execute commands from the helper.
    EDIT: As of LabTech plugin version 1.0.0.5, permissions need to be enabled on a client basis for users to have access to passwords and scripts.
  16. 7 points
    Hey everyone! My name is Ian, and I work on the Product team at ConnectWise. I'm (obviously) not active on the forums, but you can find me in Slack under igriesdorn if you'd like to chat. I wanted to talk about KI 11109583, which is the /Automate web throwing an error when a limited user clicks on the Scripts button. We identified an issue where, if a partner has rows in their lt_scripts table that have blank values for Permissions and EditPermissions, this is what causes the error for non-super-admin accounts. Running the following query will identify any scripts that are either 1) blank, and/or 2) not properly terminated:
    SELECT * FROM lt_scripts WHERE Permission = '' OR Permission NOT LIKE '%,%'
    Partners can then go to those scripts in the product, navigate to the Permissions tab, set user class permissions to limit as they wish, then save the script. If they add a user class to a blank window and then remove it, it will add "0," to the two columns in the database. While manual database manipulation is not recommended by me, if you want, you can update the values manually to "0,". We have a count of 20 partners on this KI right now, and it's mostly MSPGeek (had to delete and correct that name 😅) on the list, so I thought it would be good to reach out to you all on the forums and through Slack to see if we can get the information propagated out in the community. We will be putting a fix in the product that will prevent your /Automate web page's Scripts button from throwing the error.
    Thanks, Ian Griesdorn
    EDIT: I forgot to follow up with this, I apologize. This fix has passed through the QA process and is slated for release in 2019 Patch 4. I appreciate the positive feedback, and look forward to working with you all through Slack and on here in the future!
  17. 7 points
    This monitor identifies the current agent version for OSX, Linux and Windows Agents. Each hour the monitor checks for agents that are not at the current version, and issues the Agent Update command to a small batch of online agents. It will only issue the command once per day for any particular agent, and only if there are no pending/executing update commands already in the queue. This has been moved to the File Download section. You can download it here: https://www.mspgeek.com/files/file/45-cwa-agent-version-update-monitor/
  18. 6 points
    Following the MSPs who were impacted by this (https://www.reddit.com/r/msp/comments/ani14t/local_msp_got_hacked_and_all_clients_cryptolocked/), a number of MSPGeekers had an impromptu call to discuss security in general and what best practices we all follow to ensure our systems are as secure as possible. This prompted an idea from @MetaMSP that we have a place where best practices can be defined - things that we can put in place to make our RMMs as secure as possible. I will update this with a list of generally agreed-upon methods based on the discussion.
    How can I apply better security?
    1) Enable Multi-Factor Authentication. This functionality already exists within Automate in the form of plugins, and for the effort to implement, it gives a massive boost to security. As an MSP, every single account you have should have 2FA on it.
    2) Do not publish your Automate URL publicly - anywhere. If you are linking to your Automate site, or even your Control site, from anywhere on your website, remove it and ensure to the best of your ability that it is removed from search engine indexes. Attackers can find servers like this on Google using very simple methods, and you will be one of the first they attempt to attack.
    3) Review all plugins/extensions installed, and disable or remove the ones you no longer use. Besides the added benefit of speeding your system up, each of these adds a small risk profile, as you are relying on third-party code running in the background being secure. Removing plugins you no longer use or need reduces the attack surface.
    4) Review the ports you have open and close ports that are not needed. You will find the ConnectWise documentation on which ports should be open here: https://docs.connectwise.com/ConnectWise_Automate/ConnectWise_Automate_Documentation/020/010/020 . Don't just assume this is right - check. Ports like 3306 (MySQL DB port) and 12413 (File Redirector service) should absolutely not be opened up externally.
    5) Keep your Automate up to date. ConnectWise is constantly fixing security issues that are reported to them. You may think you are safe on that "old" version of Automate/LabTech, but in reality you are sitting on an out-of-date piece of software that is ripe for attack.
    6) DON'T share credentials except in cases of absolute necessity (only one login is available and you can't afford a single point of failure if the one person who knows it disappears). <-- Courtesy of @MetaMSP
    7) DO ensure that robots.txt is properly set on your Automate server. If you can Google your Automate server's hostname and get a result, this is BROKEN and should be fixed ASAP. (A minimal example follows after this list.) <-- Courtesy of @MetaMSP
    8) Firewall blocking. I personally block every country other than the UK and the USA from my Automate server on our external firewall. This greatly reduces your chance of being attacked out of places like China, Korea, Russia, etc.
    9) Frequently review the following at your MSP:
    - Check that the usernames and passwords set are secure; better yet, randomise them all and use a password manager.
    - Treat vendors/services that don't support or allow 2FA with extreme prejudice. I will happily drop vendors/services that don't support this. If you 100% still need to keep them, set up a periodic review and pressure them to secure their systems, because you can almost guarantee that if they are not doing customer logins properly, there will be other issues.
    - Set up a periodic review to audit users that are active on all systems: PSA, Office 365, RMM, Documentation Systems (ITGlue/ITBoost).
    - Audit third-party access: consultants' and vendors' access to your systems. <-- Thanks @SteveIT
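    For item 7, a minimal sketch of a deny-all robots.txt. The web-root path is an assumption (a default IIS location); adjust it for your own Automate install:

    # Ask crawlers not to index anything on the Automate server.
    @"
    User-agent: *
    Disallow: /
    "@ | Set-Content 'C:\inetpub\wwwroot\robots.txt'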
  19. 6 points

    Version 1.1.0

    105 downloads

    Once a role has been detected for an agent, it will remain in the list of roles for that system even if the detection rule no longer applies. There are no timestamps recorded for role changes, so it is impossible to know if the non-detection state is short-term or permanent. This internal monitor, named "Expire RoleDections Not Detected For 7 Days*", will identify inactive roles on an agent by creating a separate active alert for each role on the agent, with a timestamp for when the role was first found missing. The RAWSQL monitor is three queries in one. The first checks for any role that was reported missing more than 7 days ago, and deletes the role from the agent (based on the alert timestamp). The second query deletes role alerts from the history if the role is found to be active, or no longer exists on that agent. The last query is what actually detects missing roles to generate alerts. With the expired roles and alerts removed from the agent by the first queries, the active alert in the monitor will clear (heal) for that role also. The role must be continuously non-detected: if it is ever found to be a detected role before 7 days have passed, the alert will clear (query #2), and the monitor will start the clock again if the role becomes missing again. Manually assigned "Apply" and "Ignore" roles are preserved; only automatically detected roles are candidates for cleanup. If you want your roles to clear quicker, change the date adjustment in the first query from "-7 DAY" to whatever interval you believe is appropriate. This monitor has been updated/improved since it was first released. The attached SQL should safely update any existing version of this monitor, and it is recommended that you update even if you have this monitor in place and working, as this specific configuration may not have been published before.
  20. 6 points
    The "SW - BlackListed Install" and "SW - Unclassified Apps" monitors do not support wildcard matching, so whitelisted or blacklisted application entries with wildcards are not used in these monitor results. This causes false positives, extra ticket noise, etc. These monitors can be adjusted to support wildcards. For the "SW - Unclassified Apps" monitor, the default "Result" field value is: (Select Name from Applicationblacklist union select name from applicationwhitelist) To enable Wildcard matching, replace the Result value with the following: (SELECT DISTINCT `Name` FROM (SELECT Software.Name FROM Software JOIN (SELECT REPLACE(`Name`,'*','%') AS `Name` FROM ApplicationBlacklist WHERE INSTR(REPLACE(`Name`,'*','%'),'%')>0 UNION SELECT REPLACE(`Name`,'*','%') FROM ApplicationWhitelist WHERE INSTR(REPLACE(`Name`,'*','%'),'%')>0) AS AppMatches ON Software.`Name` LIKE AppMatches.`Name` UNION SELECT Software.Name FROM Software JOIN (SELECT `Name` FROM ApplicationBlacklist WHERE INSTR(`Name`,'%')=0 UNION SELECT `Name` FROM ApplicationWhitelist WHERE INSTR(`Name`,'%')=0) AS AppList ON Software.Name = AppList.Name) AS Applications) For the "SW - BlackListed Install" monitor, the default "Result" field value is: (Select Name from Applicationblacklist) To enable Wildcard matching, replace the Result value with the following: (SELECT DISTINCT `Name` FROM (SELECT Software.Name FROM Software JOIN (SELECT REPLACE(`Name`,'*','%') AS `Name` FROM ApplicationBlacklist WHERE INSTR(REPLACE(`Name`,'*','%'),'%')>0) AS AppMatches ON Software.`Name` LIKE AppMatches.`Name` UNION SELECT Software.Name FROM Software JOIN (SELECT `Name` FROM ApplicationBlacklist WHERE INSTR(`Name`,'%')=0) AS AppList ON Software.Name = AppList.Name) AS Applications) I hope this helps! It has been reported that some browsers are not copying the text correctly. Chrome is believed to work, Firefox may be suspect. If you have any errors try copying the text using a different browser.
  21. 6 points

    Version 3.2.2

    272 downloads

    This solution will export customizations into a folder hierarchy based on each type of backup. It uses only Automate scripting functions, so it is compatible with both Cloud Hosted and On-Prem servers. It is compatible with MySQL 5.6+ and Automate version 11+. Script backups will be placed in folders matching the script folders in your environment. Each time a script is exported, the last-updated time and user information are included, providing multiple script revisions as it is changed over time. This script does not decode the scriptdata, so script dependencies like EDFs or other scripts will not be bundled in the XML export. But if you are just looking to undo a change, the script dependencies should exist already. Scriptlets will not be "versioned", but it will detect when they have changed and will only back up new or changed scriptlets. Additionally, the following item types will also be backed up: Internal Monitors, Group Monitors, Remote Monitors, Dataviews, Role Detections, ExtraData Fields, and VirusScanners. The backups will be created at the folder identified by "@BackupRoot@", which you can provide as a script parameter when scheduling if you do not want to use the default path. Target the script against an online agent, and the script data will be backed up to that computer. Future runs will reference the saved "Script State" variable for that agent and will only include the scripts updated since the last successful backup. Backup verification is performed; if a script backup file was not created as expected, the backup timestamp will not be changed, allowing the backup to be attempted again. The attached .zip bundle contains scripts actually backed up by this solution. Import the "Send Email" script first, and then import the "Backup" script. If there are any problems, or you would rather import a script exported by Automate, the "Backup Automate Scripts (and More).xml" is included as well. You do not need to import all three files! Just schedule this to run daily against any agent to establish your script archive.
  22. 6 points
    I made this over the weekend. For a while now I have been wanting to pull true uptime statistics into Automate - i.e., presented as a percentage, how much uptime did the server have this month? To do this, a piece of embedded PowerShell running in an Automate script populates EDFs with this information. There are numerous decent data points here that can potentially have monitors running against them: https://github.com/gavsto/Connectwise-Automate-Public-Scripts
    Some ideas for usage:
    1) Trigger when more than x crashes are detected in the last 30-day period
    2) Include the uptime percentage in your reports
    3) Trigger when more than x reboots are detected in the last 30-day period
    4) Show value to customers who have required SLAs for server uptime
    Hope you all find it useful.
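    A hypothetical sketch of the underlying idea (the actual script is in the GitHub repo above): derive boot and crash counts from the System event log over the last 30 days, using event IDs 6005 (event log started), 6006 (clean shutdown), and 6008 (unexpected shutdown):

    $since   = (Get-Date).AddDays(-30)
    $events  = Get-WinEvent -FilterHashtable @{LogName='System'; Id=6005,6006,6008; StartTime=$since}
    $crashes = @($events | Where-Object Id -eq 6008).Count   # unexpected shutdowns
    $boots   = @($events | Where-Object Id -eq 6005).Count   # system starts
    "Boots: $boots; unexpected shutdowns: $crashes (last 30 days)"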
  23. 6 points
    I found that Duong's script worked great when it was updating Dell SupportAssist 3.x, but when computers had Dell SupportAssist 1.x or 2.x installed it would simply install the new version side by side. I updated it to include some uninstallation logic - first running through all the known uninstallers, and then, if that fails, doing a manual removal of the old software. Then it goes into Duong's installation script to push the new version. This has automatically fixed 99% of the problem computers I had. Dell - May 2019 - Dell Support Assist Vulnerability - Updated.xml (script called "Dell - May 2019 - Dell Support Assist Vulnerability", found in the "Share" folder if you leave it default)
  24. 6 points
    See the recording for this cast; start at around 12 minutes in to skip the countdown: https://www.youtube.com/watch?v=Lv5ZVAJIFRk&t=1s
    There was some confusion as I quickly ballooned from simple searches and groups to a remote monitor, which changed into a complex PowerShell script, which then turned into a confusing state-based monitor. I'll break it down here for people who want a quick review without watching the entire show.
    The goal: determine which virtual machines are supposed to be running on the host but are not actually running right now, and start them or make a ticket if they fail to start. To identify these virtual machines I have to make sure I find machines that are either A) not a replica, or B) if it is a replica, the PRIMARY replica. Additionally, I have to confirm that the machine is supposed to be running. I use two methods to do this: 1) I confirm that the VHD(X) was recently written to, and 2) I confirm that the VM settings are configured to start automatically.
    I use the following PowerShell cmdlets to determine what I listed above: Get-VM, Get-VMReplication, Get-VMHardDiskDrive, Get-ChildItem. I use the following PowerShell parsing methods and logical processing to properly handle the data received from those cmdlets: where, ForEach, if () {} else {}.
    My first step is to get a list of all virtual machines that are NOT in a running state. Because there are other states (Starting, Shutting down), I limit my search to those that are either Saved or Off:
    Get-VM | where {$_.State -in "Off","Saved"}
    I'm using the "-in" logical comparison to match the value of the VM State retrieved from the Get-VM command against either Off or Saved. This will return any VM that isn't running. This is the base check. Next we want to check for machines that are meant to be on. This is just an additional condition looking for virtual machines that are configured to start automatically:
    Get-VM | where {$_.State -in "Off","Saved" -and $_.AutomaticStartAction -ne "Nothing"}
    Using "-and" and "-ne" (not equals), I'm able to string a second check on the same VM object to exclude anything that is configured to DO NOTHING on reboot.
    My next step is to get the VM replication status. Using the Get-VMReplication cmdlet, I get back a list of all virtual machines that are replicas. This is similar to Get-VM except that I'm specifically getting ONLY replicas. Using similar logic to the above, I know that I only want virtual machine replicas where the primary server is the same server the check itself is running on:
    Get-VMReplication -Mode Primary|?{$_.PrimaryServer -eq ([System.Net.Dns]::GetHostByName($env:computerName).HostName)}
    If you look at Get-VMReplication you'll see a property "PrimaryServer" that has the full FQDN of the server name. The only way to get the FQDN in PowerShell without using WMI is by tapping into .NET (which is faster), and that is the second part of the command. By addressing the DNS namespace in .NET, we combine it with the computername environment variable to generate the same string that would match the PrimaryServer attribute. We now have a full list of VMs that exist on this host as the primary replica copy. Combining the first two checks so we can get a single list of virtual machines, the command will look like this.
    Get-VM | where {($_.State -in "Off","Saved" -and $_.AutomaticStartAction -ne "Nothing") -and ($_.Name -in (Get-VMReplication -Mode Primary|?{$_.PrimaryServer -eq ([System.Net.Dns]::GetHostByName($env:computerName).HostName)}).VMName)}
    Note how we created separate groups in the logic: two statements are ANDed, but the second statement (the replica primary server) is going to be ORed against a final check - whether replica is even enabled at all. The final piece to point out is how I'm selecting a specific property from the output by doing ".VMName", which is the value I want to compare against the $_.Name from Get-VM. Adding in the final condition check, to confirm that we're getting NON-REPLICA VMs as well as replica VMs where the primary server is itself, I'm going to adjust the code as follows:
    Get-VM | where {($_.State -in "Off","Saved" -and $_.AutomaticStartAction -ne "Nothing") -and ($_.Name -in (Get-VMReplication|?{$_.PrimaryServer -eq ([System.Net.Dns]::GetHostByName($env:computerName).HostName)}).VMName -or $_.ReplicationState -eq "Disabled")}
    We now have a list of VMs that are Off/Not Running, configured to turn on automatically, and will run on this host - however, we still don't know if the VM was recently used. Using Get-ChildItem and Get-VMHardDiskDrive I can pull out the path to the first VHD on the VM (which will always be the boot disk) and check the last write time. Note, however, that this can take some time, and I only want to do this for the virtual machines that we know are supposed to be on. This means we need an if statement: IF results are returned, then we'll check for the last write time.
    #Create variable containing all the Virtual Machines.
    $VirtualMachines = Get-VM | where {($_.State -in "Off","Saved" -and $_.AutomaticStartAction -ne "Nothing") -and ($_.Name -in (Get-VMReplication -Mode Primary|?{$_.PrimaryServer -eq ([System.Net.Dns]::GetHostByName($env:computerName).HostName)}).VMName)}
    #Create If Statement and check if the variable is NULL
    if ($null -ne $VirtualMachines) {
        #Loop through the virtual machines and check last write time
        $VirtualMachines|foreach {
            #create the actual check for the last write time, follow along as we nest additional cmdlets within the IF check
            if ( (Get-ChildItem (Get-VMHardDiskDrive -VMName $_.Name -ControllerLocation 0).Path).LastWriteTime -gt (Get-Date).AddDays(-2) ) {
                #Start VM
                Start-VM $_
                #Output Hostnames or VM Object
                $_
            } #end If statement for last write time
        } #end Loop statement
    } #End If statement for Null variable; the else statement reports no issues found.
    else {Write-Host "No issues detected."}
    The above script has been written in longhand with lots of comments to indicate what the script is doing. As you can see, we're still sticking to the basics explained in previous examples of pulling information; we just layer commands over themselves to do more complex comparisons and matching. The above script will do everything we need. You can remove the Start-VM line and just have it echo out the results, or you can remove the standalone $_ line to suppress the output when the machines start successfully. Keep in mind the escape characters: when using the above script in an Automate remote monitor you will need it to be one line, and you will need to call it by executing the powershell interpreter directly. The full command to use in the remote monitor is as follows.
%SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe -noprofile -command " & {$a = get-vm|? {($_.State -in \"Saved\",\"Off\" -and $_.AutomaticStartAction -ne \"Nothing\") -and ($_.ReplicationState -eq \"Disabled\" -or ($_.Name -in (Get-VMReplication -Mode Primary|?{$_.PrimaryServer -eq ([System.Net.Dns]::GetHostByName($env:computerName).HostName)}).VMName))}; if ($a){$a|% {if ((gci (Get-VMHardDiskDrive -VMName $_.Name -ControllerLocation 0).Path).LastWriteTime -gt (Get-Date).AddDays(-2)){Start-VM $_} }}else {Write-Host \"No issues detected.\"} }" Using ";" to indicate line breaks to allow multi line scripts to be executed on a single line. Regarding the remote monitor itself you can set it to be State Based by following the below screenshot. The details of the state based conditions are covered more inside the video, and will not be covered in detail here. This post was just to cover the areas that I "rushed" through so that I could focus on Automate. Please hit me up if you have any questions.
  25. 6 points
    Right-click LTSvcMon.exe and go to Properties. Go to Certificates and view the properties of the SHA1 certificate. If it's not trusted, follow these steps:
    1) Download https://secure.globalsign.net/cacert/Root-R1.crt
    2) Install this cert in your computer store. Don't let Windows choose a spot; manually set it to Trusted Root Certification Authorities.
    This fixed our issue.
    OR
    1) Set this key to 0: HKLM\Software\Policies\Microsoft\SystemCertificates\AuthRoot\DisableRootAutoUpdate
    2) Reinstall the Automate agent
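    The certificate half of the fix can also be scripted; a hedged sketch, assuming the GlobalSign URL above is still live (certutil ships with Windows):

    # Download GlobalSign Root R1 and place it directly into the machine's Trusted Root store.
    Invoke-WebRequest 'https://secure.globalsign.net/cacert/Root-R1.crt' -OutFile "$env:TEMP\Root-R1.crt"
    certutil.exe -addstore -f Root "$env:TEMP\Root-R1.crt"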
  26. 6 points
    This post is largely just being moved from another thread into its own topic, BUT I have updated the monitor so that it is much simpler to implement. This is how I test for DHCP Servers: I use the DHCP Test Utility from https://blog.thecybershadow.net/2013/01/10/dhcp-test-client/ along with a VBScript I wrote to manage the EXE. (By default the tool will use a random MAC, and each query will consume another DHCP Lease) This tool is testing DHCP Client operations, so it should NOT be run on the DHCP server. It should be run from agents that are in a position to act like DHCP clients. (Even if they have a static IP, it's OK.) To install: Extract the ZIP files. Put the dhcptest.exe and dhcptest-wrapper.vbs into your LTShare\Transfer\Monitors folder. You can pull your own copy of dhcptest.exe from https://github.com/CyberShadow/dhcptest if desired. I have not had any problems with only using the 32 bit binary on 64 bit OS versions, but YMMV. Edit "EXE - DHCPEnvironmentCheck.sql" and replace the current agent id (1183) with a valid agent ID for the monitor. Save and then import in Control Center -> Tools -> Import -> SQL File. This monitor runs every 15 minutes. The VBS script will be automatically transferred to the agent by Automate. When it runs, it will check for dhcptest.exe and automatically download it from your server. Once the tool is in place, the script will perform a DHCP query on each active network interface and return the number of offers and DHCP Server IP's that responded. The Monitor Result should match ".*;DHCPServersActive=1;.*", which should be set to the number of valid DHCP Servers. I find this effective for the group monitor, and when I need to tailor it I can just check "Override Settings" and change the result expected to match the environment. You could easily add the IP of the DHCP server as part of the match condition, so that not only must the number of servers be correct, but the server IP must match. (Probably not important). So a result with DHCPServersActive=0 is telling you that you failed to get any offers. This is clearly bad (unless there should be NO DHCP servers). Active=1 means you got an offer (Typical State). Active=2 is saying that you got more than 1 offer. This is clearly bad If only 1 server is authorized. There are some DHCP error conditions that will not be caught: All responses are specific to this agent, what you see may not match other agents because of: 1. DHCP Server Out of Leases - If it has an offer for you, it will treat it like a lease even if you don't respond. Additional queries will continue to return the IP the server has reserved for you, but the server would ignore other requests if there are no leases available. 2: DHCP Request Filtering errors - If the switch is configured for DHCP Snooping and is blocking other devices, again success for you only is getting confirmation that YOU can get a lease offer. 3. Malfunctioning DHCP Clusters - An issue I have seen that is a blend of #1 and #2. If there are two DHCP Servers in a cluster, your initial request MIGHT prompt a response from both, but once they reconcile and decide which server should respond to you the other server will ignore you so you won't see two responses. But if YOU are getting an offer from an operating server, if the other DHCP cluster host is broken it may be ignoring other client requests. So this has limited utility for testing that your DHCP server is working. 
But it is perfect for testing if unauthorized DHCP servers are running, or if your DHCP server is failing to offer any leases (even if the service is running, etc.) DHCPMonitorCheck.zip
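    As a rough illustration of how the monitor evaluates the returned string, here is the match logic expressed in PowerShell (the sample result line is hypothetical; only the regex comes from the monitor configuration above):
    # Hypothetical sample result string and the state match the monitor applies to it.
    $result = 'IF=Ethernet;DHCPServersActive=1;Servers=192.168.1.1;'   # sample only
    If ($result -match '.*;DHCPServersActive=1;.*') { 'State: SUCCESS' } Else { 'State: FAILED' }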
  27. 6 points
    At a corporate level, I have begun working on a resolution to this concern.
  28. 6 points

    Version 1.0.4

    361 downloads

    I used the stock PowerShell 2.0 update script and modified it for PowerShell 3, 4, and 5. I attempted to cover prerequisite checks to prevent installing on systems with incompatible applications or operating systems. The scripts check whether the KB is reported as installed after the update completes to determine status. This means that the script will report the update was successful even if a reboot is still needed. The attached .ZIP has 3 scripts, one for each version, in a single XML bundle. To import the Scripts, select Tools -> Import -> XML Expansion. After import the scripts should appear in the "__Examples" folder. This pairs nicely with the PowerShell Version Roles at https://www.labtechgeek.com/files/file/13-powershell-version-roles/
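    For reference, the post-install verification the scripts perform amounts to a KB presence check along these lines. This is a minimal sketch only; the KB number here is a placeholder, not necessarily the one each script checks:
    # Sketch of the KB verification concept - KB number is a placeholder.
    $kb = 'KB3191566'
    If (Get-HotFix -Id $kb -ErrorAction SilentlyContinue) {
        'Update reported as installed (a reboot may still be pending).'
    } Else {
        'Update not reported as installed.'
    }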
  29. 6 points
    Do you like seeing a 3TB volume with 250GB free displaying a red bar? And then explaining to your clients that their server is not running out of space and they did not need to open an emergency ticket after hours to tell you they thought it was? User feedback is important, but the red bar can make the client think that you aren't doing your job. So get rid of it! Here are two PowerShell command lines, tested against POSH2, that should work for Windows 7/2008R2 through Windows 10/Server 2016.
    REM #Removes the Free Space Bar from Drives in Explorer. Takes effect after next login.
    "%WINDIR%\System32\WindowsPowerShell\v1.0\powershell.exe" "$RegKey='HKLM:\Software\Classes\Drive';If (-not (Test-Path -Path $RegKey)) {$Null=New-Item -Path $RegKey -Force -EA 0};@('TileInfo','PreviewDetails')|ForEach-Object {$RegVal=(Get-ItemProperty $RegKey -Name $_ -EA 0|Select-Object -Expand $_) -replace 'System\.PercentFull;','';If ($RegVal) {Set-ItemProperty -Path $RegKey -Name $_ -Value $RegVal -EA 0}}"
    REM #Adds the Free Space Bar to Drives in Explorer. Takes effect after next login.
    "%WINDIR%\System32\WindowsPowerShell\v1.0\powershell.exe" "$RegKey='HKLM:\Software\Classes\Drive';If (-not (Test-Path -Path $RegKey)) {$Null=New-Item -Path $RegKey -Force -EA 0};@('TileInfo','PreviewDetails')|ForEach-Object {$RegVal=(Get-ItemProperty $RegKey -Name $_ -EA 0|Select-Object -Expand $_) -replace '(?<=prop:\*)(?!.*System\.PercentFull;)','System.PercentFull;';If ($RegVal) {Set-ItemProperty -Path $RegKey -Name $_ -Value $RegVal -EA 0}}"
    Either is safe to run repeatedly; they will only perform the change if needed. You could put these into scripts and run them against systems as needed. Or you could make a dropdown EDF with values like "Ignore Free Space Bars", "Remove Free Space Bars", "Add Free Space Bars" and create searches to group machines with the second or third choice selected. Then add the "Add" and "Remove" versions of the command as remote monitors that run a couple of times a day in "Data Collection" mode (no alerts generated), and you can toggle the setting on or off just by changing that EDF value.
  30. 6 points

    Version 1.0.1

    274 downloads

    This bundle will add the following Role Definition:
    PowerShell
    And the following Sub-Definitions:
    PowerShell 1
    PowerShell 2
    PowerShell 3
    PowerShell 4
    PowerShell 5
    PowerShell 6
    To import these Role Definitions, in the ConnectWise Automate main screen, go to Tools -> Import -> SQL File. Browse to the relevant file, and OK the message about inserting 7 rows.
  31. 5 points
    This was a discussion in Slack that I thought would be useful to have in a post, with some light editing.
    The questions/comments: I'm sure this is an old topic for this forum, but it's fresh on my brain after we just had to RDP into many servers that Automate said were "offline" after not shutting down and coming back up cleanly, but they were actually running. The agent was just not checking in or didn't restart correctly. Is there a best practice plugin/tool/method to ensure the agents come up clean and healthy? Along the same lines, is there a best practice around "healing" the agent, if needed? Sometimes we find LTTray not running. We have one particular client with 2 offices (one is new) and the PCs always dump LTService. We can't find the cause, but they always just stop.
    Response: There are issues sometimes (possibly network related, when fastboot is used) where the service will take too long to start and time out. There is a KB article that covers increasing the service startup timeout. (https://docs.connectwise.com/ConnectWise_Automate/ConnectWise_Automate_Knowledge_Base_Articles/Agent_-_Windows%3A_Increase_the_time_limit_for_LTService_to_run_from_registry) That may be all that you need for the case of "machine restarts but the agent never comes back online".
    Regarding healing/startup issues, there are a few things that you can do:
    1. Use an agent-side scheduled task for health checks. I do this because I use scheduled tasks to install my agents. I use a GPO to place a basic script on the agent, and create a scheduled task that runs that script about once an hour. The script tests if the service is installed and running. If not, it tries to start it. If it cannot start, OR if the service is not installed, it proceeds to remove the agent and reinstall. (Or installs the agent for the first time.) BUT, the simple script doesn't test that the agent is WORKING, only that it is RUNNING. A script appropriate for this: https://slack-files.com/T0SD04DSM-F8RA68F53-da5f31ba6a (see the sketch at the end of this post for the basic logic). SIDENOTE - This is the best possible deployment method when you have a domain because it works in almost any possible scenario, as long as AD/GPO functionality is working. It works much better than any type of push install, with only the tradeoff of not being "instant", since you have to wait for the GPO to be picked up and for the scheduled task to run. But you can get high install rates in a matter of hours, including installs to remote VPN users, multiple branches, etc. And you are automatically covered for any new machines joining the domain. So, if you follow my suggestion and use a repeating scheduled task to deploy the agent, you get a certain amount of agent healing built into that process.
    2. You can set a property to run a command when the agent goes offline. (lt_offlineshell and lt_offlinetime; see https://docs.connectwise.com/ConnectWise_Automate/ConnectWise_Automate_Documentation/060/020/020#Agent_Template_Property_Descriptions) Specifically, when the agent notices that it cannot communicate with the server, it can run a script/command locally. So you could script some tricky stuff to see if the agent is being stupid and if so force it to be restarted. But this only helps in certain situations because you are counting on the agent initiating the repair. It's not bad, but you will still get agents that have checked out and are not working right.
    3. You can build your own integration with Control to have another way to stop/start the services remotely. This is the most popular way, and works awesome. You use the agent offline monitor to spot agents that are not checking in, and trigger a remediation/autofix script. The script checks if the agent's Control session is running. If it IS, then you use the Control session to send a command line that is something like this:
    #timeout=90000 "%WINDIR%\System32\cmd.exe" /C "net stop ltservice&sc stop ltsvcmon&taskkill /im lttray.exe /im labtechupdate.exe /im labvnc.exe /f /t&ping -n 5 127.0.0.1>NUL&taskkill /im ltsvc.exe /im ltsvcmon.exe /f /t&ping -n 5 127.0.0.1>NUL&net start ltservice&sc start ltsvcmon"
    There are two popular ways to do this:
    The RMM+ScreenConnect plugin for Control by Tim Steffens/bigdessert (see https://www.labtechgeek.com/topic/2978-rmm-screenconnect-plugin-with-labtech-integration/). Combined with his HTTP Get/Post Script function plugin (available at the above post), you can do everything with just the script engine functionality. You are installing a custom plugin into Control and Automate, but you gain a simple-to-use API to work with Control. (The Control webhook API wasn't documented, so Tim created his own!)
    Chris Taylor's Control PowerShell module (see https://github.com/LabtechConsulting/ConnectWiseControlPowerShell/) doesn't require any changes to Control or Automate because it works with the undocumented Control API. The only "drawback" with the PowerShell method is that you have to have a machine to run the PowerShell commands on. If your Automate server is on-premise you can use it to run the commands. If you are cloud hosted that is a slight issue, but you just need to designate some other server/agent to run the commands.
    Both options provide enough functionality to test if the agent is online, run a command on the agent, and get some output back. So in addition to agent healing you might find other useful ways to leverage Control.
    Keywords: agent agents heal health status fail start startup timeout offline not responsive responding checkin check-in checking in
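    For anyone who wants the gist of the scheduled-task health check from option 1, here is a hedged PowerShell sketch of the logic. The linked slack-files script is the real thing; the install command path below is a placeholder for your own deployment share:
    # Hedged sketch of the hourly health-check logic - not the linked script itself.
    $installCmd = '\\yourdomain\netlogon\InstallAgent.cmd'   # placeholder for your deployment command
    $svc = Get-Service -Name 'LTService' -ErrorAction SilentlyContinue
    If (-not $svc) {
        & $installCmd                      # agent not installed: install it
    } ElseIf ($svc.Status -ne 'Running') {
        Start-Service -Name 'LTService' -ErrorAction SilentlyContinue
        If ((Get-Service -Name 'LTService').Status -ne 'Running') {
            & $installCmd                  # service present but will not start: remove/reinstall
        }
    }
    # Note: this only proves the service is RUNNING, not that the agent is WORKING.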
  32. 5 points
    Hey all, I've been active in the community for a few years now but have never really posted in the forums. I've put together a script/remote monitor to address the latest RDP vulnerability from Microsoft, and figured I've learned enough from the MSPGeek community that it can't hurt to give some back.
    The first link below is a SQL inject that will create a remote monitor on your "Service Plans\Windows Servers\Managed 24x7" and "Service Plans\Windows Workstations\Managed 8x5" groups. The groups it installs the monitor on are defined in the inject by GroupID, so if you look at the inject it's easy to change that GroupID to whatever you want before you run it.
    !!!WARNING!!! - You're running a SQL inject on your DB... this can be dangerous, proceed at your own risk. Read through the inject and make sure you're comfortable with what it's doing. This monitor also live-pulls a PowerShell script from MY GitHub. This means if I decided to have a bad day and change the PowerShell script in my GitHub to something malicious, I could effectively run my malicious code on ALL of your machines. I'm not malicious, but ya know... be smart, be safe! Feel free to host the PowerShell script at your own location and just swap the URL on the monitor. Lastly, I've tested this on several machines in my environment, but that doesn't mean there can't be an issue I haven't run into yet. If you find a problem, let me know so I can fix it!
    Download Links
    SQL Inject: https://github.com/dkbrookie/Automate-Public/blob/master/CVE/CVE-2019-1182/SQL/CVE-2019-1182_Remediation.sql
    Powershell: https://github.com/dkbrookie/Automate-Public/blob/master/CVE/CVE-2019-1182/Powershell/CVE-2019-1182.ps1
    Script breakdown...
    The script outputs either !ERROR:, !WARNING:, or !SUCCESS: with details on the state of the install process. If you set the monitor alert template to create a ticket (I have it set to Default - Do Nothing, so change it to what you want), it will output the PowerShell results right into the ticket. The keywords from the script output are used in a state-based remote monitor in Automate, so this will go through what that looks like briefly. The script checks the OS of the machine and figures out the correct KB number it needs to have installed to patch this vulnerability. Once it finds the right KB, it checks to see if the KB is installed or not. If it's not installed, it will install it with no reboot, so this is safe to run mid-day. That means right from the monitor CHECK it is actually installing the remediation, so there is no separate script attached. The patch download/install is all self-contained in the monitor check itself.
    !ERROR: will only output if the machine is eligible to receive the CVE-2019-1182 patch and something in the script actually failed and needs attention.
    !WARNING: will only output if the machine is not eligible for the CVE-2019-1182 patch. The reason I've chosen the all managed servers/workstations groups is so you can quickly/easily highlight all of the machines in WARNING state that do not have this patch available to them. This would be a good time to use this as leverage to get your clients to upgrade some machines.
    !SUCCESS: will only output if the patch has been verified to be installed.
    Monitor breakdown...
    The monitor will be named "CVE-2019-1182 Remediation".
    The monitor runs every 4hrs, but you can change this to whatever you want.
    FAILED state: Looks for the keyword "!ERROR:" from the PowerShell output.
    WARNING state: Looks for the keyword "!WARNING:" from the PowerShell output.
    SUCCESS state: Looks for the keyword "!SUCCESS:" from the PowerShell output.
    Enjoy! -Rookie
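    To make the state keywords concrete, here is a hedged sketch of the output convention the monitor parses. This is not the linked script, and the KB number is a placeholder:
    # Sketch of the !SUCCESS:/!ERROR: output convention only - see the GitHub link for the real script.
    $kb = 'KB4512506'   # placeholder KB number
    If (Get-HotFix -Id $kb -ErrorAction SilentlyContinue) {
        Write-Output "!SUCCESS: $kb verified installed, CVE-2019-1182 remediated"
    } Else {
        Write-Output "!ERROR: $kb missing and the install attempt failed, needs attention"
    }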
  33. 5 points
    I have just completed an Office purge-and-replace-with-365 script that I feel may be useful to the wider community. This script will determine the version installed, purge it, and then install a fresh copy of 365 in "Shared Computer Activation" mode. If you don't want or need shared computer activation mode, edit the file configuration.xml in the Office-365-business-retail.zip and remove these lines:
    <Property Name="SharedComputerLicensing" Value="1" />
    <Property Name="SCLCacheOverride" Value="1" />
    <Property Name="SCLCacheOverrideDirectory" Value="%userprofile%\microsoft\" />
    To change the edition installed, replace the product ID with one of the following:
    Office 365 plan -> Product ID
    Office 365 ProPlus, Office 365 Enterprise E3, Office 365 Enterprise E4, Office 365 Enterprise E5, Office 365 Midsize -> O365ProPlusRetail
    Office 365 Business, Office 365 Business Premium -> O365BusinessRetail
    Office Small Business Premium -> O365SmallBusPremRetail
    Delete any excluded apps that you want to be installed from the block below:
    <Language ID="MatchOS" Fallback="en-us" />
    <ExcludeApp ID="Lync" />
    <ExcludeApp ID="Publisher" />
    <ExcludeApp ID="Groove" />
    <ExcludeApp ID="OneDrive" />
    <ExcludeApp ID="OneNote" />
    </Product>
    It also places updates on the "broad" track, so not bleeding edge, but not "disable all updates" either.
    Office_purge_and_replace.zip
  34. 5 points
    The horribleness that is the new Agent UI Script tile has made Script Log reading a painful experience, but it started a conversation about combining SCRIPT LOG entries when possible. One method would be to defer logging, using a variable to accumulate information and then logging it all at once. Another would be to call your own SQL to append information into an existing entry. I believe this script is superior to both methods. With this script you use the SCRIPT LOG step as normal. Each time it is used, a new entry will be recorded; no special treatment of logging is needed. At the end of your script you just call this function script. It will combine all the script log lines recorded on this computer under the current script ID, since the start of this script, into 1 entry and delete the other entries. The combined result will also be returned in a variable named "@SCRIPTLOGS@" in case you want to email it, attach it to a ticket, etc.
    Download Here: FUNCTION - Consolidate Script Log Entries.zip
    Here is an example before the script is called, with the individual SCRIPT LOG entries. Here is the Script Log entry after running. Thank you @johnduprey for the work you did on this idea, which inspired me to create and share this!
    Keywords: combine merge join bundle multiple script log logs logging message messages entries scriptlog scriptlogs entry
  35. 5 points

    Version 1.0.1

    257 downloads

    This Dataview is basically the same as the "Computer Status" Dataview, but I have added a column for Agent Idle Time. I find it helpful when I need to see quickly which users are on their systems, and which machines are not being used or have long idle times so that I can work without disrupting an active user. I have added another column, `Agent Additional Users`. This shows any other logins reported by Automate, such as on a Terminal Server. For only a couple of columns difference, I have found it to be a very useful dataview and refer to it often. To import, just extract the .sql file from the zip. In Control Center select Tools->Import->SQL File. It will ask if you want to import 3 statements. This is normal. When finished, select Tools->Reload Cache.
  36. 5 points
    A few suggestions from a self-education and awareness aspect:
    Have a look at https://www.sans.org/reading-room/whitepapers/compliance/compliance-primer-professionals-33538 - it's nearly 10 years old but is still a good overview of still-applicable regulations and standards. Better yet, register at https://www.sans.org [free] and get access to behind-the-authwall whitepapers similar to (and often more recent than) the above.
    Head over to https://www.us-cert.gov/ncas/alerts and sign up to get regular emails or subscribe to the RSS feed to stay current with impactful vulnerabilities.
    Layer, layer, layer. No mitigation measure is 100% effective, and to quote @DarrenWhite99:
    That said, however, if we work together as a community to harden the platforms and tools we use as a group, we might just make Automate less of a target overall.
  37. 5 points
    @GeekOfTheSouth, I apologize that your report was not properly escalated. We were able to track your original ticket down, and it appears that it was closed waiting on a response. The ticket you opened on Thursday has been escalated to T3 support. We are going to pull that directly into development. We take security reports seriously, and I want to make sure they are escalated to development to be handled as soon as possible.
    On the issue that you are reporting: Our documentation is in error. The file reporting service is purely meant for communication on a single server via localhost, or, in a split web server installation, between the web servers hosted in a DMZ and the Automation Server. In both cases the IIS worker processes communicate via this port to download/upload files from the Automation server. At no time should port 12413/TCP be opened to any other systems, just as we recommend that MySQL 3306/TCP is also closed. We have requested that the documentation be changed to avoid other partners configuring their servers in this way. We have verified with our cloud team that instances maintained or created by them have 12413/TCP firewalled. We have verified that implementation documentation does not list 12413/TCP as a required open port for Automate servers. Our architecture team has reviewed the traversal behavior and we are working to address that issue in an upcoming patch. We are also going to separately assess the file service communication to increase security between web servers in a DMZ environment and the Automate server for further enhancements. If anyone has open access to 12413/TCP configured to their Automate server, we recommend that it be closed as soon as possible. We are assessing our options internally to identify partners that may have their servers with this port open so we can reach out to them directly.
    While we failed to get the original ticket escalated due to issues reproducing the problem, Thursday's ticket was moving through the proper escalation path. Development relies on the reproduction steps generated by T3 as an important requirement to quickly analyze and solve issues. If you feel you are not getting a response, please touch base with your Account or Support Manager and they will directly escalate issues to product so we can look into the ticket to get the reproduction steps we need. We also ask that before publicly disclosing potential vulnerabilities you consider the impact on the Automate community of a zero-day disclosure.
  38. 5 points
    This is the "Internal Monitor For Automatic Resets on Ticket Status". The tickets themselves aren't sticky, this is a solution for designating specific monitors so that if the ticket is closed without the monitor healing, the alert will be reset and a new ticket will be created. This is specifically for the scenario that exists for Internal Monitors that are configured to "Send Fail After Success". These monitors are useful because they do not continuously generate tickets or ticket comments. But issues can be lost if the ticket is closed without the monitor actually healing. If a monitor alert must be ignored, the agent should be excluded from the monitor instead of just closing the ticket. I am not saying that this should apply to every monitor (sometimes you may accept just closing the ticket). But if you have a monitor that you want to insure is not ignored, this monitor this can help. The monitor works by searching for open alerts and tickets that have been generated by the all monitors. Any alerts found where the ticket has been closed but the alert is still active will be returned as a result. You do not want to alert (create a ticket or other action) on the result of this monitor. The results are only there to show you what alerts are currently not being watched because there is an active alert with no active ticket. Based on this, you can decide which monitors you want to enforce. For monitors that you have chosen to enforce, if they are found to have no active ticket the previous alert will be cleared, allowing the monitor to generate a new alert (and ticket) the next time that monitor is processed. This monitor determines which monitors are being watched by a keyword in the other monitor's definition. To enforce a monitor (so that it's tickets will be re-opened), you need to include the string "ENABLEIMFARTS" in the Additional Condition field. A simple way to do this is to add " AND ~'ENABLEIMFARTS' " to the Additional Condition field. This will always evaluate to TRUE, so it will not change the existing monitor's logic. You could also use 'ENABLEIMFARTS'='ENABLEIMFARTS' as a test, or computers.name NOT LIKE 'ENABLEIMFARTS'. As long as the string is in the Additional Conditions, the monitor will be watched. It can easily be incorporated for regular or RAWSQL monitors. An example: A Drive Monitor is reporting that under 2GB of free space exists for the "C" drive on AgentX. A ticket is created, and the monitor is configured for "Send Fail After Success". A technician accidentally closes the ticket. This Monitor detects that there is an active alert for AgentX on the "C" drive, but all tickets from that alert have been closed. If the string 'ENABLEIMFARTS' is found in that monitor, the current alert for AgentX "C" drive will be cleared. When the Drive Monitor processes next and it still finds that AgentX has an issue with "C", because there are now no active alerts this is treated as a new alert and a new ticket will be created. To use: import the attached SQL. I have prepared it to be safe for import using SQLYog or Control Center, and if you had added it previously it will safely update your current monitor. Revision History: 2017-09-09 20:00:00 - Version 1 Posted. 2017-10-18 06:00:00 - Version 2. Adds support for ignored alerts (so it ignores that the ticket may be closed) and greatly improves the matches by catching alert tickets with customized alert subjects. 2018-12-11 03:00:00 - Attachment removed. 
Version 3 has been posted to https://www.labtechgeek.com/files/file/39-sticky-tickets-keeps-tickets-active-until-they-are-corrected/
  39. 5 points
    These are not official forums. People using this workaround while NOT complaining and NOT demanding a FIPS-compliant product gives ConnectWise an excuse not to fix their product. To be clear: the workaround is just that, a workaround. Any individual who chooses to use a workaround like that, or to use a piece of software that cannot function when FIPS mode is enabled, is responsible for that. It does not make Automate support FIPS. It tells Automate to ignore it. If you need supportability with FIPS, or if you think this feature is a must-have: OPEN A SUPPORT TICKET. TELL YOUR ACCOUNT REPRESENTATIVE. SUBMIT AN ENHANCEMENT REQUEST. These are the only things that will make a difference. Until then, providing this information here helps other admins deal with the issue without being hopelessly stuck.
  40. 4 points
    First of all, thanks @Gavsto and @DarrenWhite99 for helping me get the join file moved from the PDC to the joining computer. With that out of the way, attached is my script to join a computer that is not in the network of the domain. If you run it on any computer at a client, it will find the primary DC of that client, create an offline domain join file, move it over to the workstation, and finally join that workstation to the domain. It only works on computers that are currently in the workgroup "WORKGROUP", as a safety feature to make sure it doesn't run on anything it shouldn't. The computer does not need to be in the same network as the DC, but it can be, so this can be used for any instance where you want to join a computer to a domain. It does not allow you to log in with any domain credentials unless the computer can reach the DC. Offline caching of credentials, as far as I can tell, is not something that can be done. One improvement that I am probably not going to do any time soon is running an MD5 hash on the source and destination files to ensure the file copy went exactly right, but it is probably a good idea. TNE - Offline Domain Join.xml
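    Under the hood, offline domain join is the built-in djoin.exe workflow; a minimal sketch of the two halves looks like this (the domain, machine name, and file path are placeholders, and the attached script automates these steps plus the file move):
    # On the DC: provision the computer account and save the blob (run elevated).
    djoin.exe /provision /domain yourdomain.local /machine WORKSTATION01 /savefile C:\Temp\odj.txt
    # On the workstation: consume the blob, then reboot to complete the join.
    djoin.exe /requestODJ /loadfile C:\Temp\odj.txt /windowspath %SystemRoot% /localos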
  41. 4 points
    Special thanks: CTaylor, Gavsto, Mendy Green, kelliott_cio
    This thread is for my solution (using Gavsto's SQL and CTaylor's PoSH modules) for detecting agents that are offline in ConnectWise Automate but are online in ConnectWise Control.
    Step 1. Create the monitor. This is an exact copy of Gavsto's SQL; I take no credit.
    SELECT TIMESTAMPDIFF(MINUTE, c.LastContact, IFNULL(LastHeartbeatTime, "0000-00-00 00:00:00")) AS TestValue,
      c.name AS IdentityField,
      c.ComputerID AS ComputerID,
      c.LastContact,
      h.LastHeartbeatTime,
      acd.NoAlerts,
      acd.UpTimeStart,
      acd.UpTimeEnd
    FROM Computers AS c
    LEFT JOIN HeartBeatComputers AS h ON h.ComputerID = c.ComputerID
    LEFT JOIN AgentComputerData AS acd ON c.ComputerID = acd.ComputerID
    LEFT JOIN Clients ON Clients.ClientID = c.clientid
    WHERE (c.LastContact > NOW() - INTERVAL 30 MINUTE OR h.LastHeartbeatTime > NOW() - INTERVAL 30 MINUTE)
      AND (TIMESTAMPDIFF(MINUTE, c.LastContact, IFNULL(LastHeartbeatTime, "0000-00-00 00:00:00")) < -6
        OR TIMESTAMPDIFF(MINUTE, c.LastContact, IFNULL(LastHeartbeatTime, "0000-00-00 00:00:00")) > 6)
    Step 2. Create the script. Please import the XML attached. You will have to modify lines 8-11 and line 19 with your information. Line 8 and line 19 = the computer to run the script on. I'm hosted, so I could not use the ConnectWise Automate server. This computer needs to have PowerShell v3 or higher.
    Step 3. Create the alert template.
    Step 4. Set the alert template on your newly created global internal monitor.
    Duong_Restart Labtech Agent.xml
  42. 4 points

    Version 3.0

    358 downloads

    This RAWSQL monitor uses data from the Active Directory Plugin to check for active AD computer accounts with a Windows OS that have recent logon times and that do not have an Automate agent installed. This is helpful to identify computers that were joined to the domain without the proper procedure, or computers that your agent deployment has not succeeded in reaching. The alert will be targeted against a probe computer for each client; you will need to read the alert body (or subject) to find out the name of the computer without an agent.
    Because the alerting is "Probe" centric, be aware of the following:
    If you enable group targeting, and your targeting excludes the probe computer for a client, no machines will be reported for that client.
    If you have no probe at any location for a client, no machines will be reported for that client.
    (Update: the alerts are now applied against the DC used for the AD information gathering, so it no longer generates alerts using probe agents.)
    If you Exclude or Ignore the "computer", you are really excluding the probe AD controller computer, and if you do this no machines will be reported for that client.
    There is no simple way to whitelist machines that do not have an agent installed on purpose. You will just have to live with them being reported. To prevent a computer from triggering an alert if it should not have an agent, just update the computer description in Active Directory to include the word "Exclude". Examples: "Exclude From MSP Management", "Excluded from Automate Agent deployment", etc.
    To Import: In Control Center, open Tools -> Import -> SQL File. (System->General->Import->SQL File in Automate 12.) Select the file, and you should be asked to import 2 statements. (Cancel if it tells you a different number.) The monitor will be named "Domain Computer Without Automate Agent", and it should run every 6 hours. If you already have this monitor, you can safely import and it will apply the updates to your existing monitor.
    Keywords: active directory domain joined computer object account missing labtech automate agent machine monitor
  43. 4 points

    Version 3.0

    34 downloads

    This is the "Internal Monitor For Automatic Resets on Ticket Status" (IMFARTS) The tickets themselves aren't sticky, this is a solution for designating specific monitors so that if the ticket is closed without the monitor healing, the alert will be reset and a new ticket will be created. This is specifically for the scenario that exists for Internal Monitors that are configured to "Send Fail After Success". These monitors are useful because they do not continuously generate tickets or ticket comments. But issues can be lost if the ticket is closed without the monitor actually healing. If a monitor alert must be ignored, the agent should be excluded from the monitor instead of just closing the ticket. I am not saying that this should apply to every monitor (sometimes you may accept just closing the ticket). But if you have a monitor that you want to insure is not ignored, this monitor this can help. The monitor works by searching for open alerts and tickets that have been generated by the all monitors. Any alerts found where the ticket has been closed but the alert is still active will be returned as a result. You do not want to alert (create a ticket or other action) on the result of this monitor. The results are only there to show you what alerts are currently not being watched because there is an active alert with no active ticket. Based on this, you can decide which monitors you want to enforce. For monitors that you have chosen to enforce, if they are found to have no active ticket the previous alert will be cleared, allowing the monitor to generate a new alert (and ticket) the next time that monitor is processed. This monitor determines which monitors are being watched by a keyword in the other monitor's definition. To enforce a monitor (so that it's tickets will be re-opened), you need to include the string "ENABLEIMFARTS" in the Additional Condition field. A simple way to do this is to add " AND 'ENABLEIMFARTS'<>'IGNORED' " to the Additional Condition field. This will always evaluate to TRUE, so it will not change the existing monitor's logic. You could also use computers.name NOT LIKE 'ENABLEIMFARTS', etc.. As long as the string is in the Additional Conditions, the monitor will be watched. It can easily be incorporated for regular or RAWSQL monitors. An example: A Drive Monitor is reporting that under 2GB of free space exists for the "C" drive on AgentX. A ticket is created, and the monitor is configured for "Send Fail After Success". A technician accidentally closes the ticket. This Monitor detects that there is an active alert for AgentX on the "C" drive, but all tickets from that alert have been closed. If the string 'ENABLEIMFARTS' is found in that monitor, the current alert for AgentX "C" drive will be cleared. When the Drive Monitor processes next and it still finds that AgentX has an issue with "C", because there are now no active alerts this is treated as a new alert and a new ticket will be created. To use: Import the attached SQL. I have prepared it to be safe for import using SQLYog or Control Center, and if you had added it previously it will safely update your current monitor. Revision History: 2017-09-09 20:00:00 - Version 1 Posted. 2017-10-18 06:00:00 - Version 2. Adds support for ignored alerts (so it ignores that the ticket may be closed) and greatly improves the matches by catching alert tickets with customized alert subjects. 2018-12-11 03:00:00 - Version 3! Indicates when it resets status in the Alert History for any monitor it is acting on.
  44. 4 points
    If you go to your server URL, you can log in and download a customized installer for each location. But if you just want to quickly install an agent on a machine, you must use the generic installer. Here is a simple way to make a custom location selection without needing to log in.
    Create a file named myagent.hta, or agentinstall.hta, etc. on your server. It could be in the root, or under /LabTech/Transfer/. Save the following contents into the file, replacing "your.server.here" with your server URL.
    <html>
    <head>
    <title>Automate Agent Deployment</title>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type" />
    <hta:application applicationname="Automate Agent Deployment" version="1.1" />
    <script language="vbscript">
    Sub ResizeWindow()
      window.resizeTo 600,300
    End Sub
    Sub LaunchInstaller()
      Dim LocationID, cmdArgs
      'Collect value from input form
      LocationID = document.getElementByID("Location_id").Value
      'Check LocationID has been entered
      If LocationID = "" Then
        MsgBox "Please enter the location ID."
        Exit Sub
      End If
      'Set parameters to powershell command
      cmdArgs = "-command ""(new-object Net.WebClient).DownloadString('http://bit.ly/ltposh') | IEX; Install-LTService -Server 'https://your.server.here' -LocationID '" & LocationID & "'"""
      Set oShell = CreateObject("Shell.Application")
      oShell.ShellExecute "powershell.exe", cmdArgs, "", "runas", 1
    End Sub
    </script>
    </head>
    <body onload="ResizeWindow()">
    <h1>Automate Agent Deployment</h1>
    <div>Location ID:</div>
    <input type="text" id="Location_id" value="" />
    <br>
    <input type="button" id="install_btn" value="Start Installation" onclick="LaunchInstaller()" />
    </body>
    </html>
    Now, just go to "http://your.server.here/agent.hta" (or whatever you saved it as). It should download and launch the file, bringing up a prompt for the LocationID to install to. Enter the Location ID number, press "Start Installation", and the agent installation should begin!
    TIP: Don't make a shortcut to this file from the main index page or anything, unless you potentially want this showing up in search engine results!
  45. 4 points
    The topic sounds good, right? Really, this will be a post to address some misconceptions and for me to keep my notes. First off: I AM NOT A MySQL TUNING EXPERT. Please point out any mistakes in what I say below if you know better.
    Many of you may have heard that MySQL under Windows supports a maximum of 2048 connections. (Well, 2048-(2*Tables) connections.) It is shown in the MySQL documentation for versions 5.5-5.7 (and maybe others). Some pages say that this was resolved in MySQL 6 but the changes will not be backported. Other sources say that this limit was removed with MySQL 5.5+. Who do you trust? I trust data. MySQL includes a testing utility, mysqlslap. With some tests you can verify for yourself the concurrent connection limits, and uncover other system limits that may be a factor.
    First, a review of some my.ini settings to know before testing:
    max_connections : Recommended to be AgentCount * 3. If this is exceeded, SQL queries will fail with the error "max_connections exceeded" or similar. See https://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_connections
    max_connect_errors : My server had a value of 100. After 100 failed connections without a successful connection, ALL connections from the offending host are blocked for some period of time. I believe mostly working is better than completely not working. Until I learn otherwise, I will suggest that this be set = max_connections. (Yes, this will mostly disable the feature. Is that bad? I'm not sure.) See https://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_connect_errors
    open_files_limit : Each open socket (connection) is a file handle. Each table is at least 2 file handles. I am going to suggest that this should be set to max_connections * 3. (That's probably high... max_connections+tables*2+500 might be closer. I don't know, but I do know that if this is lower than max_connections or lower than your table count, things will be bad.) Well, maybe it won't be bad: mysqld may attempt to allocate more than the requested number of descriptors (if they are available), using the values of max_connections and table_open_cache to estimate whether more descriptors will be needed. See https://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_open-files-limit
    back_log : The TCP listen queue size. When MySQL receives a connection, it starts a thread. During this slight delay, additional connections can come in before the threads respond. Unless you expect huge spikes in connections, this does not need to be high. But... my server had a value of "80". When set to -1, it is calculated automatically as 50+(max_connections/5). max_connections is typically 3 x AgentCount, and I suspect the calculation may be higher than needed for normal use. So I am going to suggest back_log = AgentCount/5+50. So for 1000 agents: back_log=250. (Hint: You will want to raise this to the suggested level or higher if you are trying to do testing with a large simultaneous count.) See https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_back_log
    bind-address : Optional, used to control which IP addresses the server will listen on.
    * = All IPv4 and IPv6 addresses (Default)
    0.0.0.0 = All IPv4 addresses - DO NOT DO THIS - I put this entry in specifically to remind myself not to try that again.
    See https://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_bind-address
    All MySQL Options Reference: https://dev.mysql.com/doc/refman/5.5/en/mysqld-option-tables.html
    Before you change ANYTHING to any values I have suggested, please research and become informed yourself, and test to verify results.
    Now, for testing. (Almost) the last thing to be aware of: TCP ephemeral ports and TCP TIME_WAIT. I may explain this more later; for now, know that running out of ports is very bad. Settings to adjust are MaxUserPort and TcpTimedWaitDelay. See https://support.microsoft.com/kb/196271 and http://www.networkjack.info/blog/2008/07/21/windows-server-ephemeral-ports-and-stale-sockets/
    Some of the testing parameters (see also https://dev.mysql.com/doc/refman/5.5/en/mysqlslap.html):
    --concurrency= Number of concurrent client sessions. (If you want to prove you can reach your max_connections, set this equal to max_connections.)
    --auto-generate-sql-execute-number= Number of queries per client session. (Normally this would be 3: create a table, query it, and drop it.) Setting this higher causes each client session to run longer, increasing the odds that you will reach max_connections.
    --iterations= Number of times to repeat the test above. Total connections should be concurrency*iterations*(sql-execute-number/detach).
    --detach= Number of queries to complete before dropping and reconnecting the session.
    Testing commands:
    #Kick off Load Test:
    "C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlslap.exe" --user=ADMINUSERorROOT --password="PASSWORD" --host=localhost --auto-generate-sql --auto-generate-sql-execute-number=100 --concurrency=40 --iterations=2 --number-int-cols=2 --number-char-cols=2 -v --auto-generate-sql-load-type=read
    #Stress a little:
    "C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlslap.exe" --user=ADMINUSERorROOT --password="PASSWORD" --host=localhost --auto-generate-sql --auto-generate-sql-execute-number=100 --concurrency=4000 --iterations=2 --number-int-cols=5 --number-char-cols=2 -v --auto-generate-sql-load-type=read
    #Break Max Connections: (Set concurrency=2*max_connections) - If back_log is low, connections will rate limit and you won't get all attempts to occur simultaneously.
    "C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlslap.exe" --user=ADMINUSERorROOT --password="PASSWORD" --host=localhost --auto-generate-sql --auto-generate-sql-execute-number=100 --concurrency=8000 --iterations=2 --number-int-cols=5 --number-char-cols=2 -v --auto-generate-sql-load-type=read
    #Break Ephemeral Ports (Probably): Make each thread drop and reconnect every 10 queries. Should exhaust the ports in TIME_WAIT state.
    "C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlslap.exe" --user=ADMINUSERorROOT --password="PASSWORD" --host=localhost --auto-generate-sql --auto-generate-sql-execute-number=1000 --concurrency=40 --iterations=10 --detach=10 --number-int-cols=2 --number-char-cols=2 -v --auto-generate-sql-load-type=read
    During testing, to see MySQL server stat info run:
    SHOW STATUS WHERE `variable_name` LIKE '%connect%' AND VALUE <> 0;
    During testing, to see connection information run:
    cmd /v:on /c "SET "newf=%temp%\netstat-temp-%RANDOM%.tmp"&netstat -an|find ":3306">"!newf!"&echo Established:&type "!newf!"|find /c "ESTABLISHED"&echo Time_Wait:&type "!newf!"|find /c "TIME_WAIT"&echo Other States:&type "!newf!"|find /v "ESTABLISHED"|find /v /c "TIME_WAIT"&del "!newf!">NUL"
    Keywords: mysql server tuning my.ini server variable variables setting settings parameter parameters performance optimize
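    If you want to eyeball the current ephemeral port settings before touching them, here is a quick PowerShell check (both values may be absent from the registry, which means the OS defaults are in effect):
    # Read MaxUserPort and TcpTimedWaitDelay if they have been set; absence means defaults apply.
    $key = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters'
    Get-ItemProperty -Path $key -Name MaxUserPort,TcpTimedWaitDelay -ErrorAction SilentlyContinue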
  46. 4 points
    The easiest way to find out what variables you can use is to throw a "SCRIPT RUN" step into your alert script at the beginning, calling the following script:
    Script Folder : _System Automation\Functions
    Script Name : Show Variables*
    Script Client Only : No
    Script Notes : Writes the Internal and User Script Variables to the Script log. This script can be called from your script to log all the variables at that moment in time. Script Verified by Greg Buerk on Mar 5 2009
    It will make a script log entry with all the variables that exist, so if the drive letter value is in a variable, you will see it.
  47. 4 points

    Version 1.0.1

    252 downloads

    The function Script "GetFile" is a simple way to always follow best practices for file downloads as well as enabling advanced functionality. A sample script showing how to use it is included, plus two Scriptlets to insert the script steps needed to use this in two clicks. Previously this script was known as GetLTFile. At its minimum, you can use this by defining two variables (the source and destination for the download) and calling the script. The script will test if the file already exists and skip downloading automatically. It will verify the target folder exists before attempting to download, and create it if needed. If a download fails it will try again by default, and will automatically enable verbose logging after two failures from the same script. Compressed file contents are automatically extracted (.ZIP under Windows, OSX, Linux if unzip is installed. .TGZ under OSX, Linux). By defining additional variables you can require that the file meets Minimum or Maximum size limits or verify the MD5 Checksum is valid (all three options supported under Windows, OSX, and Linux). You can require the file to always be re-downloaded, specify the number of retries allowed, and can even support automatic failover by specifying multiple sources. Supported download protocols are ones supported by the "File Download" function (LTShare\Transfer\*) and "File Download URL" function (HTTP, HTTPS for all agents plus UNC paths for Windows Agents). With the optional WINSCP binary, SFTP:// is also a supported protocol (Windows only). To import the Scripts, select Tools -> Import -> XML Expansion. After import the scripts should appear in the root Scripts folder. To import the Scriptlets, select Tools -> Import -> SQL File. The Scriptlets should be imported AFTER the scripts have already been added. For more instructions on Scriptlets see https://docs.connectwise.com/ConnectWise_Automate/ConnectWise_Automate_Documentation/070/240/050/020#Use_Scriptlets
  48. 4 points
    This monitor will select the first agent (lowest agent ID) in each location and alert against it if the location has no probe, or if the probe has been offline for over 14 days. The selected agent in each location must also have been online within the past 30 days. This will keep empty/dead locations from generating alerts. If a location should be excluded, just exclude the agent that is being reported from the monitor. This monitor would be good to target against your managed service plan groups. This will keep alerts from being raised on locations with no managed agents.
    To Import: In Control Center, open Tools -> Import -> SQL File. (System->General->Import->SQL File in Automate 12.) Select the file, and you should be asked to import 2 statements. (Cancel if it tells you a different number.) The monitor will be named "Location without Active Probe". After importing you should adjust the alert template and ticket category for the monitor. Then hit Build and View to find out where your problem locations are right away. The monitor as configured will run every 12 hours, so there should be a little leeway between first adding agents to a location and establishing the probe before an alert is generated.
    UPDATE 20180213 - I updated this to reference the 'Exclude Offline Check' Location EDF. You can delete the existing monitor and re-import, or update the additional condition to the following:
    computers.locationid NOT IN (
      SELECT DISTINCT computers.locationid FROM computers
      JOIN locations ON locations.locationid=computers.locationid AND locations.probeid=computers.computerid
      WHERE computers.lastcontact>DATE_ADD(NOW(),INTERVAL -14 DAY)
    ) AND computers.locationid NOT IN (
      SELECT DISTINCT ID FROM extrafielddata
      WHERE extrafielddata.`Value`=1 AND extrafielddata.extrafieldid IN (
        SELECT ID FROM Extrafield WHERE `Name`='Exclude Offline Check' AND Form = 2
      )
    )
    Keywords: internal monitor location missing active network probe computer agent
    MONITOR - Location without Active Probe.zip
  49. 4 points
    I got some fantastic intelligence out of this, Darren, thank you. If you're reading this and wondering whether you should implement this, you absolutely should. I found two clients where the primary DNS forwarder was failing; fixing this sped up their internet considerably.
  50. 4 points
    The LTTray process is started by the LTSvc.exe process (which runs as a service). LTTray is responsible for reporting back logged-in users that LabTech can interact with. Sometimes there is a port conflict or other issue that hangs LTTray. Users may complain of an unnamed white box showing up as a running program on their taskbar. You may notice the agent doesn't report that anyone has logged in, or that the agent is not applying a selfupdate successfully. I have created a monitor that looks for computers with lttray and dwm processes running (strong indicators that a user is logged in) but that are reporting a user is 'Not logged in'. I also have an Autofix script that will stop the ltservice and lttray processes, test for a process blocking the LTTrayPort, and move the port to a new number if needed. It will then restart the services and verify that the correct user status is now being reported. Monitor Labtech Agent Service Restart.zip
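    For context, the port-conflict test in the autofix is conceptually like this hedged sketch. The TrayPort registry value name/location is an assumption based on agents I have seen (verify for your version), and Get-NetTCPConnection/Get-NetUDPEndpoint need Windows 8/2012 or later; the sketch checks both TCP and UDP since the transport used for the tray port may vary:
    # Sketch: is something other than the agent already using the tray port?
    $port = (Get-ItemProperty 'HKLM:\SOFTWARE\LabTech\Service' -ErrorAction SilentlyContinue).TrayPort   # assumed value name
    If ($port) {
        $pids = @()
        $pids += @(Get-NetTCPConnection -LocalPort $port -ErrorAction SilentlyContinue | ForEach-Object OwningProcess)
        $pids += @(Get-NetUDPEndpoint -LocalPort $port -ErrorAction SilentlyContinue | ForEach-Object OwningProcess)
        ForEach ($procId in ($pids | Sort-Object -Unique)) {
            $p = Get-Process -Id $procId -ErrorAction SilentlyContinue
            If ($p -and $p.Name -notlike 'LT*') {
                "Port $port is in use by $($p.Name) (PID $procId); the autofix would move the tray port to a free number."
            }
        }
    }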