
Leaderboard


Popular Content

Showing content with the highest reputation since 02/11/19 in all areas

  1. 7 points
    We, as a team, have noticed over the last couple of days that there has been some pretty heavy criticism of ConnectWise across forums, Slack, and social media outlets like Reddit in relation to the problems with the Automate binaries. A lot of the criticism has been relatively unfounded. The problems found within the product are not issues that we believe could have been picked up by any standard QA or testing measures - bearing in mind that our team member only stumbled onto this by complete accident. Regarding the criticism of ConnectWise's notification to partners, it's much easier for the leaders of a community like ours to notify our members, as we are not bound by the processes, procedures and multiple teams required to get a fix like this out successfully at a corporate level. Though there are clear areas for improvement in QA and testing processes, every member of the MSPGeek team was impressed by the speed of response and subsequent delivery of a fix by the Automate team. Our community was founded on the idea of mutual assistance, open sharing and good communication and, for the past 6 years, has provided a trusted platform for users to support each other while helping ConnectWise make the product better. Let's continue to help ConnectWise by providing constructive criticism while helping them, and each other, through this bump in the road; it's the MSPGeek way.
  2. 2 points
    Most people should be using the default:

    User-agent: *
    Disallow: /

    I'm sure several people already understand what robots.txt is and does, but I'll elaborate a bit for those who might be out of the loop. When web robots crawl your site, the standard is to first look for <domain>/robots.txt to see if there are any rules that they need to abide by. This is an industry standard that's been around since the 90s. Some websites use it to prevent certain sections of their site from being indexed online and therefore not searchable in a search engine. In Automate's case we don't want anything indexed, so you'll want to make sure that your robots.txt matches the text above. Granted, there are exceptions to every rule, so it's possible that someone out there has a good reason for a more customized robots.txt, but I can't think of one. This is obviously an oversimplification, so I'll link a more detailed overview below. For more information: http://www.robotstxt.org/robotstxt.html
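    If you want to verify the result from the outside, here is a rough check written in Python; it's only a sketch of mine (not part of the original post), and "automate.example.com" is a placeholder for your own Automate FQDN.

    import urllib.request

    # Fetch robots.txt from the Automate server and look for the recommended
    # "deny everything" policy described above.
    URL = "https://automate.example.com/robots.txt"   # placeholder hostname

    with urllib.request.urlopen(URL, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")

    lines = [line.strip().lower() for line in body.splitlines()]
    if "user-agent: *" in lines and "disallow: /" in lines:
        print("robots.txt disallows all crawling - looks good")
    else:
        print("robots.txt does NOT block all crawlers - review it:")
        print(body)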
  3. 2 points
    You need to make super sure that the variable does not have a linefeed character in it (a common outcome when setting it to %shellresult%). Using a Script String Regex function to set the variable to a match for the regex [^\r\n]* (no quotes) will give you one line of text, with no linefeed, as the result.
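    To see what that regex actually does, here is a small illustration in plain Python (my own example - the Automate script step itself isn't Python, but the pattern behaves the same way):

    import re

    # Stand-in for a typical %shellresult% value that ends with CR/LF.
    shellresult = "C:\\Windows\\LTSvc\\LTSvc.exe\r\n"

    # The regex from the post: match everything up to the first CR or LF.
    clean = re.match(r"[^\r\n]*", shellresult).group(0)
    print(repr(clean))   # 'C:\\Windows\\LTSvc\\LTSvc.exe' - no linefeed left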
  4. 1 point
    Right-click LTSvcMon.exe, go to Properties, and open the certificate details for the SHA1 signature. If it's not trusted, follow these steps:
    - Download https://secure.globalsign.net/cacert/Root-R1.crt
    - Install this cert in your computer store. Don't let Windows choose a location; manually set it to Trusted Root Certification Authorities.
    This fixed our issue.

    OR

    - Set this key to 0: HKLM\Software\Policy\Microsoft\SystemCertificates\AuthRoot\DisableRootAutoUpdate
    - Reinstall the Automate agent.
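    If you would rather script the registry option than edit it by hand, a rough sketch along these lines should work. This is my own Python example, not part of the post; the registry path is copied verbatim from above (double-check it on your system), and it must be run elevated.

    import winreg

    # Path as written in the post above - verify it before relying on it.
    KEY_PATH = r"Software\Policy\Microsoft\SystemCertificates\AuthRoot"

    # Create (or open) the key and set DisableRootAutoUpdate to 0 so Windows
    # can pull trusted root certificates automatically again.
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                             winreg.KEY_SET_VALUE | winreg.KEY_WOW64_64KEY)
    winreg.SetValueEx(key, "DisableRootAutoUpdate", 0, winreg.REG_DWORD, 0)
    winreg.CloseKey(key)
    print("DisableRootAutoUpdate = 0 - now reinstall the Automate agent.")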
  5. 1 point
    Following the MSPs who were impacted by this https://www.reddit.com/r/msp/comments/ani14t/local_msp_got_hacked_and_all_clients_cryptolocked/ a number of MSPGeekers had an impromptu call to discuss security in general and what best practices we all follow to ensure our systems are as secure as possible. This prompted an idea from @MetaMSP that we have a place where best practices can be defined - things that we can put in place to make our RMMs as secure as possible. I will update this with a list of generally agreed-upon methods based on the discussion.

    How can I apply better security?

    1) Enable Multi-Factor Authentication. This functionality already exists within Automate in the form of plugins, and for the effort to implement it gives a massive boost to security. As an MSP, every single account you have should have 2FA on it.

    2) Do not publish your Automate URL publicly - anywhere. If you are linking to your Automate site, or even your Control site, from anywhere on your website, remove it and ensure to the best of your ability it is removed from search engine indexes. Attackers can find servers like this on Google using very simple methods and you will be one of the first they attempt to attack.

    3) Review all plugins/extensions installed and disable or remove the ones you no longer use. Besides the added benefit of speeding your system up, each of these adds a small amount of risk, because you are relying on third-party code running in the background being secure. Removing plugins you no longer use or need reduces the attack surface.

    4) Review the ports you have open and close the ports that are not needed. You will find the ConnectWise documentation on what ports should be open here: https://docs.connectwise.com/ConnectWise_Automate/ConnectWise_Automate_Documentation/020/010/020 . Don't just assume this is right - check. Ports like 3306 (MySQL DB port) and 12413 (File Redirector Service) should absolutely not be opened up externally. (See the port-check sketch after this list.)

    5) Keep your Automate up to date. ConnectWise are constantly fixing security issues that are reported to them. You may think you are safe on that "old" version of Automate/LabTech, but in reality you are sitting on an out-of-date piece of software that is ripe for attack.

    6) DON'T share credentials except in cases of absolute necessity (only one login is available AND you can't afford a single point of failure if the one person who knows it disappears). <-- Courtesy of @MetaMSP

    7) DO ensure that robots.txt is properly set on your Automate server. If you can Google your Automate server's hostname and get a result, this is BROKEN and should be fixed ASAP. <-- Courtesy of @MetaMSP

    8) Firewall blocking. I personally block every country other than the UK and the USA from my Automate server on our external firewall. This greatly reduces your chance of being attacked out of places like China, Korea, Russia etc.

    9) Frequently review the following at your MSP:
    - Check that the usernames and passwords in use are secure; better yet, randomise them all and use a password manager.
    - Treat vendors/services that don't support or allow 2FA with extreme prejudice. I will happily drop vendors/services that don't support this. If you 100% still need to keep them, set up a periodic review and pressure them to secure their systems, because you can almost guarantee that if they are not handling customer logins properly there will be other issues.
    - Set up a periodic review to audit users that are active on all systems: PSA, Office 365, RMM, documentation systems (ITGlue/IT Boost).
    - Audit third-party access: consultants and vendor access to your systems. <-- Thanks @SteveIT
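    As a companion to item 4, here is a rough port-check sketch in Python. It is not from the discussion itself; the hostname is a placeholder and the port labels only reflect what is described above. Run it from outside your network so you see what an attacker would see.

    import socket

    HOST = "automate.example.com"   # placeholder - use your Automate server's FQDN
    PORTS = {
        443:   "HTTPS web/agent traffic (normally expected)",
        3306:  "MySQL DB port - should NOT be reachable externally",
        12413: "File Redirector Service - should NOT be reachable externally",
    }

    # Attempt a TCP connection to each port and report whether it answered.
    for port, label in PORTS.items():
        try:
            with socket.create_connection((HOST, port), timeout=3):
                state = "OPEN"
        except OSError:
            state = "closed/filtered"
        print(f"{port:>5}  {state:<15}  {label}")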
  6. 1 point
    This thread is meant to be an overview for each individual Version 12 patch. We know that all patches are not created equal and there are some patches that we should avoid like the plague (I'm looking at you, 11.18/12.0). The top post will be consistently updated with known bugs/issues for each patch, so you can tell at a glance whether you feel comfortable upgrading or would rather hold out for the next patch. At the time of creating this, Patch 12.5 (12.0.5.327) has just been released, so the details for previous patches might not be as thorough as for patches 12.5 and newer.

    Blue = Must install ASAP (unless skipping to a newer patch)
    Green = Stable Patch
    Orange = Relatively Stable Patch
    Red = Unstable Patch

    Official ConnectWise Automate Patch Notes for all released patches can be found here.

    ***Install as soon as possible.***
    Patch 2019.2 (19.0.2.58) - ***If you have upgraded any time in the last few months, then you HAVE to install this patch. Your server is a ticking time bomb.***
    Benefits/Features: This patch includes one of the most important features you can ask for... a server that will still be alive in a month.
    Known Bugs/Issues: Doesn't matter - this patch is required and bugs are a secondary priority. (Bugs will still be added here as they are found, but until another patch succeeds this one, this patch is a must.)
    ^^^Install as soon as possible.^^^

    Patch 2019.1 (19.0.1.38) - 2019 is off to a great start. The patch was pulled by CW and re-released 3 days later.
    Benefits/Features: New versioning - this change was put in place to further mirror the rest of the ConnectWise suite. To upgrade to this patch you must currently have a v12 patch installed. You should not attempt to upgrade from v11 or older without first moving to one of the patches listed here (preferably v12 Patch 12).
    Known Bugs/Issues: 19.0.1.35 was the original version released to the masses, but a newer version was pushed out shortly after. If you're installing or have installed this patch, be sure that it ends in .38, not .35, .36 or .37. There also seems to be a problem with the new network probe on this patch (source + workaround).
  7. 1 point
    Thank you DarrenWhite99. That solved the issue. Below are the changes that I made to get it working per your suggestion.
  8. 1 point

    Version 2.2.0

    237 downloads

    This monitor identifies the current agent version for OSX, Linux and Windows agents. Every 5 minutes the monitor checks for agents that are below the current version and issues the Agent Update command to a small batch of online agents. It will only issue the command once per day for any particular agent, and only if there are no pending/executing commands already in the queue for that agent. It dynamically calculates the concurrency limit so that it can process all of your agents in about 12 hours' time. This spreads the load out while still bringing all agents up to date as quickly as possible. Commands from before the agent was restarted are ignored, so it can update right away after a reboot even if it already failed to update within the past 24 hours.

    The monitor will only report verified update failures, so you can use it to generate a ticket or to run your own update remediation script, since you know the self-update has failed. Even though only a small number of agents will be asked to update each time it runs, the monitor will report ALL online, out-of-date agents. You do not need to run a script, generate a ticket or do anything else; the monitor issues the update commands directly in SQL. If you don't generate a ticket, you should check the monitor periodically and see which agents are reported, since they are failing to update and will need some sort of manual intervention.

    This file has been specially prepared to be safe for importing in Control Center through Tools -> Import SQL. The monitor will be named "CWA AGENT - Needs Update". If you have imported a previous version of this monitor, most of your customizations (Monitor Name, Alert Template, Ticket/Report Category, etc.) will be preserved, but the necessary changes will be made automatically. Unlike prior versions of this monitor, you can safely "Build and View" and it will not continue to add more update commands. Pending/in-process update commands reduce the concurrency limit so that the monitor never overloads the system with too many update commands at once.

    FAQ:

    My agent is outdated but the monitor doesn't show it. Why?
    These are the criteria for an update command to be issued. Until the monitor tries to update an agent, it will never report it as failing to update. If any of these conditions are not met, this is why you aren't seeing the agent update command:
    - Is the agent version out of date?
    - Is the agent online? (last contact within the past 15 minutes)
    - Are there no commands currently pending or executing?
    - Have no update commands been issued within the past day for the current version?
    - Have fewer than LIMIT (a custom value dynamically adjusted for your environment) update commands already been issued and not completed?
    After answering YES to all of these checks, the monitor will issue the command to update the agent. It will only permit up to LIMIT update commands to be pending/executing at once, so if you have a large number of agents to update it might be a while (up to 12 hours) before any particular agent is asked to update. (A rough code sketch of this checklist appears at the end of this post.)

    Once an agent has been asked to update, the following criteria determine whether the agent will be reported as failed:
    - Has an update command been issued within the past day?
    - Is the agent online? (last contact within the past 15 minutes)
    - Did the update command report failure, OR has the update command been executing/completed for over 2 hours?
    - Is the agent version still out of date?
    After answering YES to all of these checks, the monitor will report that the agent has failed to update.

    Why won't my agent update?
    This can be caused by many things. Some common ones:
    - Insufficient agent resources (low RAM/disk space/available CPU)
    - Another software install is in progress.
    - The agent is pending a reboot.
    - A file cannot properly extract from the update. (Check for read-only attributes or invalid file/folder permissions for "%WINDIR%\Temp\_LTUpdate".)
    - A file is locked and cannot be replaced. (LTSvc.exe, LTTray.exe, etc. might fail to stop. A .DLL might be open or locked by AV or a third-party program.)
    Nearly all of these are resolved with a reboot, so that is a good troubleshooting step after checking the file attributes/permissions.

    What alternative methods are available to update my agent?
    Look for this section to be expanded in the future.
    - LTPoSh has an Update-LTService function. Calling this function through Control is a highly effective way to resolve update failures for Windows agents.
    - LTPoSh (or other solutions) can be used to reinstall the agent using Control, PSExec, or any other method at your disposal. (For example, I have used ESET Remote Administrator to execute an install command.)

    Keywords: update, outdated, out of date, updated, current, agent version, automatic, automated, internal monitor, rawsql
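    For readers who find the eligibility checklist easier to follow as code, here is a plain-Python paraphrase of it. The field names are hypothetical and this is only an illustration of the logic; the real monitor evaluates these conditions in SQL against the Automate database.

    from datetime import datetime, timedelta

    # Hypothetical agent record; the actual monitor works directly in SQL.
    def should_issue_update(agent, current_version, pending_update_count, limit):
        online_cutoff = datetime.now() - timedelta(minutes=15)
        return (
            agent["version"] < current_version                    # out of date?
            and agent["last_contact"] >= online_cutoff            # online (recent check-in)?
            and not agent["has_pending_commands"]                 # nothing queued or executing?
            and not agent["updated_today_for_current_version"]    # not already asked today?
            and pending_update_count < limit                      # concurrency limit not reached?
        )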
  9. 1 point
    Sorry for the late response, but @Ban-Hammer is correct. Follow the link that he provided, or the link under the patch name in the OP, to download it. Patching a CWA server is actually relatively easy, so I'll add the steps below and edit them into the original post.
    - Download the patch to the CWA server (the patch is linked in the OP, or if you're on Slack open a DM to yourself and type `!patch 19 2` to get an auto-response with the patch link).
    - Right-click the .exe and select Properties.
    - Check the "Unblock" option at the bottom and then click OK.
    - Right-click the .exe again and select "Run as Admin".
    - Allow the prompt to finish and verify that it says the patch installed correctly.
    - Open the Control Center (it should auto-update to the latest CC, making you log in twice initially - if not, you need to adjust the dashboard settings to enable that).
    - When the Control Center loads, go to Help > About.
    - Verify that the version installed matches the version that you're trying to install (v19.0.58 (Patch 2) in this case).
    If you have any problems with any of the steps above, reach out to the Slack community and/or CW support (try us in Slack first). Please update to Patch 2019.2 as fast as possible, because your server AND agents need to be updated soon to avoid problems in the near future. The Patch 12 hotfix would work the same way, but the stability of 2019.2 is solid enough that I would just go ahead and upgrade.
  10. 1 point
    I've attached a PDF I created for my guys for when they would say there was a "false positive" for a server offline. I always hated that phrase: OK, the server is running, but obviously there is some sort of issue that they need to look into, so there's nothing false about it. FalsePositives.pdf
  11. 1 point
    So, there is a way to do this.

    # Increase the agentsversion value.
    UPDATE agentsversion
    SET `version`=`version`+1
    WHERE agentid IN (
      SELECT AgentID FROM agents
      WHERE agents.`ComputerID`='44356'
        AND (agents.`DataOut` LIKE '%VEEAM%' OR agents.`AlertMessage` LIKE 'VEEAM%')
    );

    # Flag the monitor as changed, forcing it to rebuild.
    # (Filter agents directly; MySQL does not allow a subquery against the same table being updated.)
    UPDATE agents
    SET `changed`=IFNULL(NULLIF(0-ABS(agents.`Changed`),0),-1)
    WHERE agents.`ComputerID`='44356'
      AND (agents.`DataOut` LIKE '%VEEAM%' OR agents.`AlertMessage` LIKE 'VEEAM%');

    The higher value in agentsversion should break the condition where your change is not considered newer than what is already on the remote agent. No other deleting is needed. This post (and what I discovered while investigating) prompted me to create a monitor to automatically fix remote monitor version mismatches. It is posted here: https://www.mspgeek.com/files/file/46-internal-monitor-for-remote-monitor-version-mismatch/
  12. 1 point
    Using your query, you just want to touch a couple of other tables first:

    # Break the Group Control (will re-populate).
    DELETE FROM groupagentscontrol
    WHERE AgentID IN (
      SELECT AgentID FROM agents
      WHERE agents.`ComputerID`='44356'
        AND (agents.`DataOut` LIKE '%VEEAM%' OR agents.`AlertMessage` LIKE 'VEEAM%')
    );

    # Clear the Agent Version history.
    DELETE FROM agentsversion
    WHERE AgentID IN (
      SELECT AgentID FROM agents
      WHERE agents.`ComputerID`='44356'
        AND (agents.`DataOut` LIKE '%VEEAM%' OR agents.`AlertMessage` LIKE 'VEEAM%')
    );

    # Now remove the agent for the remote monitor.
    DELETE FROM agents
    WHERE agents.`ComputerID`='44356'
      AND (agents.`DataOut` LIKE '%VEEAM%' OR agents.`AlertMessage` LIKE 'VEEAM%');

    This will force the monitor to get a new AgentID (with no prior agentid version to conflict) and will remove the old agentid since it isn't found in the database. This will also wipe the monitor history and orphan any open tickets (you will have to manually close them), so it's not a perfect solution.
  13. 1 point
    Gavin does this by creating his own fake Anti Virus detection and dumping those files on computers he wants excluded.... @Gavsto
  14. 1 point

    Version 3.2.2

    126 downloads

    This solution will export customizations into a folder hierarchy based on each type of backup. It uses only Automate scripting functions, so it is compatible with both Cloud Hosted and On-Prem servers. It should be compatible with MySQL 5.6 and Automate version 11+.

    Script backups will be placed in folders matching the script folders in your environment. Each time a script is exported, the last-updated time and user information are included, providing multiple script revisions as the script is changed over time. This script does not decode the scriptdata, so script dependencies like EDFs or other scripts will not be bundled in the XML export; but if you are just looking to undo a change, the script dependencies should already exist. Scriptlets will not be "versioned", but the solution will detect when they have changed and will only back up new or changed Scriptlets.

    Additionally, the following item types will also be backed up: Internal Monitors, Group Monitors, Remote Monitors, Dataviews, Role Detections, ExtraData Fields, and VirusScanners.

    The backups will be created at the folder identified by "@BackupRoot@", which you can provide as a script parameter when scheduling if you do not want to use the default path. Target the script against an online agent, and the script data will be backed up to that computer. Future runs will reference the saved "Script State" variable for that agent and will only include the scripts updated since the last successful backup. Backup verification is performed: if a script backup file was not created as expected, the backup timestamp will not be changed, allowing the backup to be attempted again.

    The attached .zip bundle contains scripts actually backed up by this solution. Import the "Send Email" script first, and then import the "Backup" script. If there are any problems, or you would rather import a script exported by Automate, the "Backup Automate Scripts (and More).xml" is included as well. You do not need to import all three files! Just schedule this to run daily against any agent to establish your script archive.

    Keywords: script, version, revision, archive, backup
  15. 1 point
    I believe the new Connectwise Automate File Service (CWAFileService) was added as of v12 patch 10. This is a fun one, especially if it is on hosted servers. I still am not able to reproduce this on any of the servers I manage, hosted or otherwise, but none of them are fresh post-patch 10 installs, so maybe that's why. Either way, if it is on hosted servers, I wouldn't want to be on the CWA Cloud team this week...
  16. 1 point

    Version 3.0

    11 downloads

    This is the "Internal Monitor For Automatic Resets on Ticket Status" (IMFARTS) The tickets themselves aren't sticky, this is a solution for designating specific monitors so that if the ticket is closed without the monitor healing, the alert will be reset and a new ticket will be created. This is specifically for the scenario that exists for Internal Monitors that are configured to "Send Fail After Success". These monitors are useful because they do not continuously generate tickets or ticket comments. But issues can be lost if the ticket is closed without the monitor actually healing. If a monitor alert must be ignored, the agent should be excluded from the monitor instead of just closing the ticket. I am not saying that this should apply to every monitor (sometimes you may accept just closing the ticket). But if you have a monitor that you want to insure is not ignored, this monitor this can help. The monitor works by searching for open alerts and tickets that have been generated by the all monitors. Any alerts found where the ticket has been closed but the alert is still active will be returned as a result. You do not want to alert (create a ticket or other action) on the result of this monitor. The results are only there to show you what alerts are currently not being watched because there is an active alert with no active ticket. Based on this, you can decide which monitors you want to enforce. For monitors that you have chosen to enforce, if they are found to have no active ticket the previous alert will be cleared, allowing the monitor to generate a new alert (and ticket) the next time that monitor is processed. This monitor determines which monitors are being watched by a keyword in the other monitor's definition. To enforce a monitor (so that it's tickets will be re-opened), you need to include the string "ENABLEIMFARTS" in the Additional Condition field. A simple way to do this is to add " AND 'ENABLEIMFARTS'<>'IGNORED' " to the Additional Condition field. This will always evaluate to TRUE, so it will not change the existing monitor's logic. You could also use computers.name NOT LIKE 'ENABLEIMFARTS', etc.. As long as the string is in the Additional Conditions, the monitor will be watched. It can easily be incorporated for regular or RAWSQL monitors. An example: A Drive Monitor is reporting that under 2GB of free space exists for the "C" drive on AgentX. A ticket is created, and the monitor is configured for "Send Fail After Success". A technician accidentally closes the ticket. This Monitor detects that there is an active alert for AgentX on the "C" drive, but all tickets from that alert have been closed. If the string 'ENABLEIMFARTS' is found in that monitor, the current alert for AgentX "C" drive will be cleared. When the Drive Monitor processes next and it still finds that AgentX has an issue with "C", because there are now no active alerts this is treated as a new alert and a new ticket will be created. To use: Import the attached SQL. I have prepared it to be safe for import using SQLYog or Control Center, and if you had added it previously it will safely update your current monitor. Revision History: 2017-09-09 20:00:00 - Version 1 Posted. 2017-10-18 06:00:00 - Version 2. Adds support for ignored alerts (so it ignores that the ticket may be closed) and greatly improves the matches by catching alert tickets with customized alert subjects. 2018-12-11 03:00:00 - Version 3! Indicates when it resets status in the Alert History for any monitor it is acting on.
  17. 1 point

    Version 1.0.4

    377 downloads

    The Internal Monitor "Notify When Agent is Online" watches machines with the "Notify When Online" computer EDF configured. It will send an alert as soon as it finds that the agent is offline. (The offline notice is skipped if the agent was already offline when notifications were first enabled.) When the agent comes online again, another alert email will be sent and the EDF will be reset. This monitor can be used to notify when a lost computer comes online, or when that machine that is only online in the office every few weeks is back.

    To enable notifications for an agent, you simply put your email address into the "Notify When Online" EDF. You can enter multiple addresses separated by ";". The contents of the agent's "Comments" will be included in the email also. (Helpful to remember why you wanted to be alerted, or what instructions should be followed after receiving the alert.) When the agent returns online, the Network Inventory and System Info are refreshed. The recovery email will include the following details:

    The last check in was at @AgentCheckIn@.
    Public IP Detected: %RouterAddress%
    Internal IP: %LocalAddress%
    System Uptime: %uptime%
    Last Logged in User: %lastuser%

    This bundle includes a Script+EDF XML, and a SQL file with the Internal Monitor. To import the Script and EDF, select Tools -> Import -> XML Expansion. After import, the script should appear in the "\Autofix Actions" folder. To import the Internal Monitor, select Tools -> Import -> SQL File. The monitor should be imported AFTER the script bundle has already been added. After importing, verify that a valid Alert Template is selected for the monitor. The Alert Template MUST have the "Run Script" action enabled without any script specified in the template. (The script is set on the monitor.) Read the Script Notes for advanced control over the number of times a notification will be triggered.
  18. 1 point
    The horribleness that is the new Agent UI Script tile has made Script Log reading a painful experience, but it started a conversation about combining SCRIPT LOG entries when possible. One method would be to defer logging, using a variable to accumulate information and then logging it all at once. Another would be to call your own SQL to append information into an existing entry. I believe this script is superior to both methods.

    With this script you would use the SCRIPT LOG step as normal. Each time it is used, a new entry will be recorded as normal; no special treatment of logging is needed. At the end of your script you just call this function script. It will combine all of the script log lines recorded on this computer under the current script ID, since the start of this script, into one entry and delete the other entries. The combined result will also be returned in a variable named "@SCRIPTLOGS@" in case you want to email it, attach it to a ticket, etc.

    Download here: FUNCTION - Consolidate Script Log Entries.zip

    Here is an example before the script is called, with the individual SCRIPT LOG entries. Here is the Script Log entry after running.

    Thank you @johnduprey for the work you did on this idea, which inspired me to create and share this!

    Keywords: combine, merge, join, bundle, multiple, script log, logs, logging, message, messages, entries, scriptlog, scriptlogs, entry
  19. 0 points
    https://www.webroot.com/us/en/about/press-room/releases/carbonite-to-acquire-webroot This is horrible news. We used AVG when they were the darling of the industry many years ago. They got big, and downhill the antivirus went. We switched to Vipre when they were the darling of the industry. They were acquired by GFI and went downhill afterwards. We switched to Webroot and have been happy... until now. I have zero hope that Webroot will fare any differently after this acquisition than AVG or Vipre did. On top of that... Carbonite??? Really? I can't believe they are still in business in the day of super-cheap and much better online backup choices; Carbonite is as outdated as floppy disks. I would have been less surprised if Webroot had acquired Carbonite. Anyone with recommendations for where we go next when Webroot starts to suck?