

Popular Content

Showing content with the highest reputation on 02/07/19 in all areas

  1. 3 points
    See the recording for this cast here: https://www.youtube.com/watch?v=Lv5ZVAJIFRk&t=1s (start at around 12 minutes in order to skip the countdown until 8:30). There was some confusion as I quickly ballooned from simple Searches and Groups to a Remote Monitor that changed into a complex PowerShell script that then turned into a confusing state-based monitor. I'll break it down here for people who want a quick review without watching the entire show.

The goal: determine which virtual machines are supposed to be running on the host but are not actually running right now, and start them, or make a ticket if they fail to start. To identify these virtual machines I have to make sure I find machines that are either a) not a replica, or b) if it is a replica, it's the PRIMARY replica. Additionally I have to confirm that the machine is supposed to be running. I use two methods to do this: 1) I confirm that the VHD(X) was recently written to, and 2) I confirm that the VM's settings are configured to start it automatically.

I use the following PowerShell cmdlets to determine what I listed above: Get-VM, Get-VMReplication, Get-VMHardDiskDrive, Get-ChildItem. I use the following PowerShell parsing methods and logical processing to properly handle the data received from those cmdlets: where, ForEach, and if () { } else { }.

My first step is to get a list of all virtual machines that are NOT in a running state. Specifically, because there are other transient states (Starting, Shutting down), I limit my search to those that are either Saved or Off:

Get-VM | where {$_.State -in "Off","Saved"}

I'm using the "-in" comparison operator to match the VM State value retrieved from the Get-VM command against either Off or Saved. This will return any VM that isn't running. This is the base check. Next we want to check for machines that are meant to be on. This is just an additional condition looking for virtual machines that are configured to start automatically.
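As a quick aside, the "-in" operator used in the filter above can be tested on its own at any PowerShell prompt; this is a minimal sketch not tied to any VM:

```
# -in returns $true when the left-hand value appears in the right-hand list
"Off" -in "Off","Saved"        # $true  -> such a VM would match the filter
"Running" -in "Off","Saved"    # $false -> running VMs are excluded
```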
Get-VM | where {$_.State -in "Off","Saved" -and $_.AutomaticStartAction -ne "Nothing"}

Using "-and" and "-ne" (not equal), I'm able to string a second check onto the same VM object to exclude anything that is configured to do NOTHING on reboot.

My next step is to get the VM replication status. Using the cmdlet Get-VMReplication I am returned a list of all virtual machines that are replicas. This is similar to Get-VM except that I'm specifically getting ONLY replicas. Using similar logic to the above, I know that I only want virtual machine replicas where the primary server is the same server the command is running on:

Get-VMReplication -Mode Primary | ?{$_.PrimaryServer -eq ([System.Net.Dns]::GetHostByName($env:computerName).HostName)}

If you look at Get-VMReplication you'll see a property "PrimaryServer" that holds the full FQDN of the server name. One way to get the FQDN in PowerShell without using WMI is by tapping into .NET (which is faster), and that is the second part of the command: we use the DNS namespace in .NET to resolve the computername environment variable into the same FQDN string that the PrimaryServer attribute contains. We now have a full list of VMs that exist on this host as the primary replica copy.

Combining the first two commands together so we can get a single list of virtual machines, the command will look like this:

Get-VM | where {($_.State -in "Off","Saved" -and $_.AutomaticStartAction -ne "Nothing") -and ($_.Name -in (Get-VMReplication -Mode Primary|?{$_.PrimaryServer -eq ([System.Net.Dns]::GetHostByName($env:computerName).HostName)}).VMName)}

Note how we created separate groups in the logic: the first two statements are ANDed together, and the second group (the replica primary-server check) is going to be ORed against a final check, whether replication is even enabled at all.
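If you want to see what that FQDN lookup returns on its own, you can run the .NET call in isolation. Note that GetHostByName is marked obsolete in newer .NET versions and Dns.GetHostEntry is the usual replacement; this is a sketch, and the value returned depends on your DNS setup:

```
# Resolve this computer's FQDN via .NET, as used in the filter above
$fqdn = [System.Net.Dns]::GetHostByName($env:computerName).HostName
$fqdn   # e.g. something like HV01.yourdomain.local on a domain-joined host

# Equivalent call using the non-obsolete API
[System.Net.Dns]::GetHostEntry($env:computerName).HostName
```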
The final piece to point out is how I'm selecting a specific property from the output by doing ".VMName", which is the value I want to compare against the $_.Name from Get-VM.

Adding in the final condition check, to confirm that we're getting NON-REPLICA VMs as well as replica VMs where the primary server is this host, I'm going to adjust the code as follows:

Get-VM | where {($_.State -in "Off","Saved" -and $_.AutomaticStartAction -ne "Nothing") -and ($_.Name -in (Get-VMReplication|?{$_.PrimaryServer -eq ([System.Net.Dns]::GetHostByName($env:computerName).HostName)}).VMName -or $_.ReplicationState -eq "Disabled")}

We now have a list of VMs that are off/not running, configured to turn on automatically, and would run on this host if they were on. However, we still don't know if the VM was recently used. Using Get-ChildItem and Get-VMHardDiskDrive I can pull out the path to the first VHD on the VM (which will always be the boot disk) and check its last write time. Note that this can take some time, and I only want to do it for the virtual machines that we know are supposed to be on. This means we need to create an if statement to see IF results are returned; only then will we check the last write time.

#Create variable containing all the Virtual Machines.
$VirtualMachines = Get-VM | where {($_.State -in "Off","Saved" -and $_.AutomaticStartAction -ne "Nothing") -and ($_.ReplicationState -eq "Disabled" -or $_.Name -in (Get-VMReplication -Mode Primary|?{$_.PrimaryServer -eq ([System.Net.Dns]::GetHostByName($env:computerName).HostName)}).VMName)}
#Create If statement and check if the variable is NULL
if ($null -ne $VirtualMachines) {
    #Loop through the virtual machines and check the last write time
    $VirtualMachines | foreach {
        #Create the actual check for the last write time; follow along as we nest additional cmdlets within the if check
        if ( (Get-ChildItem (Get-VMHardDiskDrive -VMName $_.Name -ControllerLocation 0).Path).LastWriteTime -gt (Get-Date).AddDays(-2) ) {
            #Start VM
            Start-VM $_
            #Output Hostnames or VM Object
            $_
        } #end If statement for last write time
    } #end Loop statement
} #End If statement for Null variable; create else statement so "no issues" is reported
else { Write-Host "No issues detected." }

The above script has been written long-hand with lots of comments to indicate what the script is doing. As you can see, we're still sticking to the basics explained in previous examples of pulling information; we just overlay commands on top of one another to do more complex comparisons and matching. The above script will do everything we need. You can remove the Start-VM line and just have it echo out the results, or you can remove the lone $_ line to suppress the output when the machines start successfully. Keep in mind the escape characters: when using the above script in an Automate Remote Monitor it will need to be one line, and you will need to call it by executing the PowerShell interpreter directly. The full command to use in the remote monitor is as follows:

%SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe -noprofile -command " & {$a = get-vm|?
{($_.State -in \"Saved\",\"Off\" -and $_.AutomaticStartAction -ne \"Nothing\") -and ($_.ReplicationState -eq \"Disabled\" -or ($_.Name -in (Get-VMReplication -Mode Primary|?{$_.PrimaryServer -eq ([System.Net.Dns]::GetHostByName($env:computerName).HostName)}).VMName))}; if ($a){$a|% {if ((gci (Get-VMHardDiskDrive -VMName $_.Name -ControllerLocation 0).Path).LastWriteTime -gt (Get-Date).AddDays(-2)){Start-VM $_} }}else {Write-Host \"No issues detected.\"} }"

We use ";" to indicate line breaks, which allows a multi-line script to be executed on a single line. Regarding the remote monitor itself, you can set it to be state-based by following the below screenshot. The details of the state-based conditions are covered inside the video and will not be covered in detail here. This post was just to cover the areas that I "rushed" through so that I could focus on Automate. Please hit me up if you have any questions.
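For anyone who wants to test the logic interactively before flattening it into the monitor one-liner, the same script can be laid out across multiple lines in an elevated PowerShell session on the Hyper-V host. This is just the command above with the escape characters removed and the aliases expanded:

```
# Same logic as the remote-monitor one-liner, unescaped for interactive use
$a = Get-VM | Where-Object {
    ($_.State -in "Saved","Off" -and $_.AutomaticStartAction -ne "Nothing") -and
    ($_.ReplicationState -eq "Disabled" -or
     ($_.Name -in (Get-VMReplication -Mode Primary |
        Where-Object { $_.PrimaryServer -eq ([System.Net.Dns]::GetHostByName($env:computerName).HostName) }).VMName))
}
if ($a) {
    $a | ForEach-Object {
        # Only start VMs whose boot disk was written to in the last two days
        if ((Get-ChildItem (Get-VMHardDiskDrive -VMName $_.Name -ControllerLocation 0).Path).LastWriteTime -gt (Get-Date).AddDays(-2)) {
            Start-VM $_
        }
    }
} else {
    Write-Host "No issues detected."
}
```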
  2. 2 points
    Huntress Labs' recording of the CISA Awareness Briefing on Chinese Malicious Cyber Activity, which details a state-sponsored threat to MSPs specifically, and also makes mitigation recommendations: https://huntress-public.s3.amazonaws.com/DHS_China_Webinar.mp4
  3. 1 point
    Following the MSPs who were impacted by this https://www.reddit.com/r/msp/comments/ani14t/local_msp_got_hacked_and_all_clients_cryptolocked/ a number of MSPGeekers had an impromptu call to discuss security in general and what best practices we all follow to ensure our systems are as secure as possible. This prompted an idea from @MetaMSP that we have a place where best practices can be defined: things that we can put in place to make our RMMs as secure as possible. I will update this with a list of generally agreed-upon methods based on the discussion.

How can I apply better security?

1) Enable Multi-Factor Authentication. This functionality already exists within Automate in the form of plugins, and for the effort to implement it gives a massive boost to security. As an MSP, every single account you have should have 2FA on it.

2) Do not publish your Automate URL publicly - anywhere. If you are linking to your Automate site, or even your Control site, from anywhere on your website - remove it and ensure to the best of your ability it is removed from search engine indexes. Attackers can find servers like this on Google using very simple methods and you will be one of the first they attempt to attack.

3) Review all plugins/extensions installed and disable or remove the ones you no longer use. Besides the added benefit of speeding your system up, each of these adds a small risk profile, as you are relying on third-party code running in the background being secure. Removing plugins you no longer use or need reduces the attack surface.

4) Review the ports you have open and close ports that are not needed. You will find the ConnectWise documentation on what ports should be open here: https://docs.connectwise.com/ConnectWise_Automate/ConnectWise_Automate_Documentation/020/010/020 . Don't just assume this is right - check. Ports like 3306 (MySQL DB port) and 12413 (File Redirector service) should absolutely not be opened up externally.
5) Keep your Automate server up to date. ConnectWise are constantly fixing security issues that are reported to them. You may think you are safe on that "old" version of Automate/LabTech, but in reality you are sitting on an out-of-date piece of software that is ripe for attack.

6) DON'T share credentials except in cases of absolute necessity (only one login is available and you can't afford a single point of failure if the one person who knows it disappears). <-- Courtesy of @MetaMSP

7) DO ensure that robots.txt is properly set on your Automate server. If you can Google your Automate server's hostname and get a result, this is BROKEN and should be fixed ASAP. <-- Courtesy of @MetaMSP

8) Firewall blocking. I personally block every country other than the UK and the USA from my Automate server on our external firewall. This greatly reduces your chance of being attacked out of places like China, Korea, Russia etc.

9) Frequently review the following at your MSP: Check that the usernames and passwords set are secure; better yet, randomise them all and use a password manager. Treat vendors/services that don't support or allow 2FA with extreme prejudice. I will happily drop vendors/services that don't support this. If you 100% still need to keep them, set up a periodic review and pressure them to secure their systems, because you can almost guarantee that if they are not doing customer logins properly there will be other issues. Set up a periodic review to audit users that are active on all systems: PSA, Office 365, RMM, documentation systems (ITGlue/IT Boost). Audit third-party access: consultants' and vendors' access to your systems. <-- Thanks @SteveIT
  4. 1 point
    Hey everyone! My name is Ian, I work on the Product team at ConnectWise. I'm (obviously) not active on the forums, but you can find me in Slack under igriesdorn if you'd like to chat. I wanted to talk about KI 11109583, which is the /Automate web page throwing an error when a limited user clicks on the Scripts button. We identified an issue where, if a partner has rows in their lt_scripts table that have blank values for Permission and EditPermissions, that is what causes this error for non-super-admin accounts. Running the following query will identify any scripts that are either 1) blank, and/or 2) not properly terminated:

SELECT * FROM lt_scripts WHERE Permission = '' OR Permission NOT LIKE '%,%'

Partners can then go to those scripts in the product, navigate to the Permissions tab, set user class permissions to limit as they wish, then save the script. If they add a user class to a blank window and then remove it, it will add "0," to the two columns in the database. While manual database manipulation is not recommended by me, if you want, you can update the values manually to "0,". We have a count of 20 partners on this KI right now, and it's mostly MSPGeek (had to delete and correct that name 😅) on the list, so I thought it would be good to reach out to you all on the forums and through Slack to see if we can get the information propagated out in the community. We will be putting a fix in the product that will prevent your /Automate web page's Scripts button from throwing the error. Thanks, Ian Griesdorn

EDIT: I forgot to follow up with this, I apologize. This fix has passed through the QA process and is slated for release in 2019 Patch 4. I appreciate the positive feedback, and look forward to working with you all through Slack and on here in the future!
  5. 1 point
    Here's a selection from my daily SQL maintenance script:

# Automatically classify drives as SSDs
UPDATE drives SET SSD='1'
WHERE (
  Model LIKE '%SSD%' OR Model LIKE '%NVMe%' OR Model LIKE 'SAMSUNG MZ%' OR Model LIKE 'SAMSUN MZ%'
  OR Model LIKE 'SanDisk SD%' OR Model LIKE 'SK hynix%' OR Model LIKE '%LITEONIT LMT-256%'
  OR Model LIKE '%LITEONIT LCS-128%' OR Model LIKE '%LITEON LCH-128%' OR Model LIKE '%LITEON L8H-256%'
  OR Model LIKE 'MTFDDAK%' OR Model LIKE 'PLEXTOR PX%' OR Model LIKE '%Optane%'
  OR Model LIKE 'SD Card' OR Model LIKE '%SD-CARD%' OR Model LIKE '%SDXC Card%'
  OR Model LIKE '%Flash Drive%' OR Model LIKE '%Flash Disk%' OR Model LIKE '%Flash Memory%'
  OR Model LIKE 'hp v150w USB Device' OR Model LIKE 'SanDisk Cruzer%' OR Model LIKE 'SanDisk U3 Cruzer%'
  OR Model LIKE '%USB 2.0 FD%' OR Model LIKE '%Card Reader%'
) AND SSD <> '1';

UPDATE drives SET INTERNAL='1'
WHERE (Model LIKE 'SAMSUN MZ%' OR VolumeName LIKE 'Windows') AND INTERNAL <> '1';

UPDATE drives SET INTERNAL='0'
WHERE (VolumeName LIKE 'HP_TOOLS' OR VolumeName LIKE 'HP_RECOVERY' OR Model LIKE '%USB%' OR SmartStatus LIKE 'USB%') AND INTERNAL <> '0';

# Drop drives marked as "missing" from the database.
DELETE FROM drives WHERE missing=1;

# Reset hotfix status and re-allow push
UPDATE hotfix SET pushed='0' WHERE installed=0 AND pushed=1 AND approved=2;

# Clean up the h_scripts table
DELETE FROM `h_scripts` WHERE HistoryDate < DATE_SUB(NOW(), INTERVAL (SELECT VALUE FROM properties WHERE NAME = 'RetentionHistoryScriptLogs') DAY);
OPTIMIZE TABLE h_scripts;

# Delete all ticket data older than 30 days
DELETE FROM tickets WHERE updateDate < NOW() - INTERVAL 30 DAY;
OPTIMIZE TABLE tickets;
DELETE FROM ticketdata WHERE ticketid NOT IN (SELECT ticketid FROM tickets);
OPTIMIZE TABLE ticketdata;
  6. 1 point
    A few suggestions from a self-education and awareness perspective: Have a look at https://www.sans.org/reading-room/whitepapers/compliance/compliance-primer-professionals-33538 - it's nearly 10 years old but is still a good overview of still-applicable regulations and standards. Better yet, register at https://www.sans.org [free] and get access to behind-the-authwall whitepapers similar to (and often more recent than) the above. Head over to https://www.us-cert.gov/ncas/alerts and sign up to get regular emails, or subscribe to the RSS feed, to stay current with impactful vulnerabilities. Layer, layer, layer. No mitigation measure is 100% effective, and to quote @DarrenWhite99: That said, however, if we work together as a community to harden the platforms and tools we use as a group, we might just make Automate less of a target overall.
  7. 1 point
    Thanks for this, Ian. Identifying things like this and releasing workarounds to the community makes all the difference, rather than waiting months for a fix to come through in a patch. I just wanted to take the time to say thanks; if more things like this can be done for known issues, it will change the perception of 'ticket logged, see something in a few months hopefully'. So once again, thanks. Very much appreciated.
  8. 1 point
    I completely agree, which is precisely why there should be a proper structure in place for reporting security vulnerabilities. Losing vulnerability reports because a support ticket was closed when a partner didn't respond is serious amateur-hour stuff. This is also the second time I know of that it has happened (one of my privately reported ones got lost in the same way, mostly because the initial support engineer could not comprehend what I was trying to raise). I implore ConnectWise to put a proper procedure in place for reporting security vulnerabilities, allowing for responsible disclosure. In the meantime, at least train the existing staff to escalate anything like this immediately to the appropriate resource.
  9. 1 point
    @GeekOfTheSouth , I apologize that your report was not properly escalated. We were able to track your original ticket down, and it appears that it was closed while waiting on a response. The ticket you opened on Thursday has been escalated to T3 support, and we are going to pull it directly into development. We take security reports seriously, and I want to make sure they are escalated to development to be handled as soon as possible. On the issue that you are reporting: our documentation is in error. The file service is purely meant for communication on a single server via localhost, or, in a split web server installation, between the web servers hosted in a DMZ and the Automation server. In both cases the IIS worker processes communicate via this port to download/upload files from the Automation server. At no time should port 12413/TCP be opened to any other systems, just as we recommend that MySQL's 3306/TCP is also kept closed. We have requested that the documentation be changed to avoid other partners configuring their servers in this way. We have verified with our cloud team that instances maintained or created by them have 12413/TCP firewalled. We have verified that the implementation documentation does not list 12413/TCP as a required open port for Automate servers. Our architecture team has reviewed the traversal behavior and we are working to address that issue in an upcoming patch. We are also going to separately assess the file service communication to increase security between web servers in a DMZ environment and the Automate server for further enhancements. If anyone has open access to 12413/TCP configured on their Automate server, we recommend that it be closed as soon as possible. We are assessing our options internally to identify partners that may have their servers with this port open so we can reach out to them directly.
While we failed to get the original ticket escalated due to issues reproducing the problem, Thursday's ticket was moving through the proper escalation path. Development relies on the reproduction steps generated by T3 as an important requirement to quickly analyze and solve issues. If you feel you are not getting a response, please touch base with your Account or Support Manager and they will directly escalate issues to Product so we can look into the ticket and get the reproduction steps we need. We also ask that, before publicly disclosing potential vulnerabilities, you consider the impact of a zero-day disclosure on the Automate community.
  10. 1 point
    Some points:

- Good thought on the service account user. From my understanding of the stated/known purpose of the service, that shouldn't break it.
- Putting LTShare on a separate volume is another good mitigation.
- Through observation I can say that the service monitors for file changes (it seems to catch file create actions; notably it MISSES when I modify a file). I don't specifically know what process might trigger it to scan folders.
- Why is the server in the same layer-2 segment as your LAN? Putting it in a separate subnet/VLAN would permit you to apply firewall rules at the L3 boundary.
- The port is only intended for access by IIS. Even if the server is on your LAN subnet, if you only have one server (not split) then you can use Windows Firewall to permit access from localhost and block access from all other IPs. (And even if you are split, you can permit access from only the Automate server front ends and block everything else.)
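For the single-server case described above, a Windows Firewall rule along these lines would block the file service port from everything except the local machine. This is a sketch, assuming the built-in NetSecurity cmdlets on Server 2012 R2 or later; the rule names are my own:

```
# Block inbound TCP 12413 (CWAFileService) from all remote addresses.
# Loopback traffic from IIS on the same box is typically not affected by inbound rules.
New-NetFirewallRule -DisplayName "Block CWAFileService from remote hosts" `
    -Direction Inbound -Protocol TCP -LocalPort 12413 `
    -RemoteAddress Any -Action Block

# In a split setup, instead scope a rule to only the web front ends (IP is hypothetical):
# New-NetFirewallRule -DisplayName "Allow CWAFileService from web FE only" `
#     -Direction Inbound -Protocol TCP -LocalPort 12413 `
#     -RemoteAddress 10.0.5.10 -Action Allow
```

Note that in Windows Firewall a block rule wins over an allow rule, so use one approach or the other, not both.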
  11. 1 point
    Meh.. I thought this would be harder to track down given the brainpower in this thread.... start http://localhost:12413/FINDME.txt Oh Process Monitor... Show me file system accesses containing "FINDME" And what is LabTech\FileService.exe? It's the new Connectwise Automate File Service (CWAFileService). Once I knew where it was targeting, I didn't need to guess how to ask for a file: I was able to access http://localhost:12413/..../Windows/LTsvc/LTErrors.txt and retrieve the file. So, it does permit directory traversal, although that is basically what it is for. My understanding is that it is used for split server environments to allow the servers to replicate/retrieve files between themselves. As far as I know, it shouldn't be used/accessed by anything external. If someone has that port publicly accessible, then that would definitely not be good. This part bears repeating. IF ANYONE HAS THIS PORT PUBLICLY ACCESSIBLE IT IS NOT GOOD! I really hope that I don't need to explain myself.
  12. 1 point
    We were having trouble managing workstations, especially laptops, because they were going offline overnight. This monitor/autofix setup has drastically improved the situation.

Components:

Install and Apply Power Plan [function script]
This creates and runs a powershell script to download a .pow file, install the power plan, and apply it. It assumes that @powerplanFileSource@ has been defined and points to a .pow file in the LTShare Transfers folder. So if your power plan file is \LTShare\Transfers\PowerPlans\nosleep.pow, you will have defined powerPlanFileSource = PowerPlans\nosleep.pow. It sets a variable @installAndApplyPowerPlanResult@ = success upon success, so you can check the result after calling it.

Apply Power Plan [function script]
This creates and runs a powershell script to apply an already-installed power plan. It assumes that @powerPlanName@ has been defined and is the power plan it should apply to the computer. It sets a variable @applyPowerPlanResult@ = success upon success, so you can check the result after calling it.

Apply [YOUR POWER PLAN NAME] [script]
This script conditionally runs the two function scripts above. You set the required variables in lines 2 and 3, and it will check to see whether the plan is installed and act accordingly. It sets a variable @autofixResult@ = success upon success, so you can check it after calling it.

~Autofix incorrect power plan [script]
This is an autofix script to be called by a monitor. If called, it will run the Apply [YOUR POWER PLAN NAME] script. If the script is successful, we're fine. If the script fails, it will create a ticket with the subject and body defined by lines 2 and 3 of the Then section, and if the monitor succeeds it will close the ticket with the note defined by line 2 of the Else section.

On Incorrect Power Plan [monitor]
This is a RAWSQL monitor that fails if your power plan isn't applied, and will be configured to use an alert template executing ~Autofix incorrect power plan.
Configuration

1. Create your power plan. On a laptop, set up the desired power configuration, including lid actions. Save it with a name you want your clients to see if they go looking at their power plan. Get the GUID of your power plan with the command powercfg /List, then export the power plan to a .pow file with powercfg -export "%UserProfile%\Desktop\MyPowerPlan.pow" GUID (GUID is the GUID from the previous step). Move MyPowerPlan.pow somewhere in your LTShare\Transfers folder.
2. Import the attached files into Automate.
3. Modify the Apply [YOUR POWER PLAN NAME] script. Rename it and change the Notes section as needed. Set lines 2 and 3 to the correct values for the power plan you created and the file you exported. Ensure line 24 runs the "Install and Apply Power Plan" script. Ensure line 34 runs the "Apply Power Plan" script.
4. Modify the ~Autofix incorrect power plan script. Set lines 2 and 3 of the Then section and line 2 of the Else section as desired. Ensure line 13 points to the Apply [YOUR POWER PLAN NAME] script.
5. Modify the On Incorrect Power Plan monitor. In Configuration > Additional Condition, change pp.currentPlan != "[YOUR POWER PLAN NAME]" so it references the name of the power plan you created in step 1 (no brackets). In Configuration > Additional Condition, change WHERE AgentID=[YOUR MONITOR ID] to use the monitor ID (this is set upon import).
6. Create an alert template. Go to Automation > Templates > Alert Templates (assuming Automate 12), click New Template, name it as you like, and add an alert to run the ~Autofix incorrect power plan script, applied every day all day.

Now it's just a vanilla monitor setup where you enable the monitor for whatever groups you want (e.g. Patching.Patch Install - Workstations, Service Plans.Windows Workstations.Managed 24x7) and set it to use the alert template you created in step 6. -rgg

*thanks to @Gavsto for his RAWSQL writeup. It's so good I just open it by default every time I'm starting a RAWSQL monitor.
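The install-and-apply idea can be sketched in plain powercfg calls; this is not the attached script, just a minimal outline of the same approach (the file path and plan name are assumptions):

```
# Minimal sketch: import a .pow file and make it the active plan.
# Assumes the plan file was already downloaded, e.g. by the Automate script.
$planFile = 'C:\Windows\Temp\MyPowerPlan.pow'
$planName = 'My Power Plan'

powercfg -import $planFile

# powercfg /list prints lines like "Power Scheme GUID: <guid>  (<name>)";
# find the imported plan by name and activate it by GUID.
$line = powercfg /list | Select-String -SimpleMatch "($planName)"
if ($line -match '([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})') {
    powercfg /setactive $Matches[1]
}
```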
~Autofix incorrect power plan.xml Apply [YOUR POWER PLAN NAME].xml Apply Power Plan.xml incorrect_powerplan_monitor.sql Install and Apply Power Plan.xml