
MetaMSP last won the day on July 10 2019

MetaMSP had the most liked content!

Community Reputation

19 Good

1 Follower

My Information

  • Agent Count



  1. Portscan make benefit glorious nation of Kazakhstan?
  2. You'll want the amazing work from Darren White at: Good luck!
  3. Ping me in Slack about this, I'll be happy to help.
  4. You DID backup the table before modifying it, right? 🤣 I think you can bring them back by forcing a "Reload Solution Center Client" from the menu on the home screen of the Solution Center itself; that forces a rescan of all objects in the environment.
  5. Groups are exactly the preferred, best-practice tool to accomplish your desired results. Groups can not only apply monitors, but also schedule scripts, apply templates, and many other things. They are the central cog around which all other Automate modules rotate. The learning curve for Automate is steep, and climbing it takes time. If you are under the gun, I highly suggest engaging an experienced consultant to get you up and running quickly. CW has consulting and training departments, and there are a number of independent consultants that can work with you (disclaimer: I am one). Good luck and welcome aboard - don't be afraid to dive in, the water's fine.
  6. https://github.com/ManagedITStack/labtech_decode_scriptxml
  7. I had problems running the extension installer against anything older than Control v6.9.
  8. We warned ya lol - but good writeup! Hope everything you have to submit doesn't get closed as environmental though
  9. That's how important it is; it's worth repeating.
  10. Huntress Labs' recording of the CISA Awareness Briefing on Chinese Malicious Cyber Activity, which details a state-sponsored threat to MSPs specifically, and also makes mitigation recommendations: https://huntress-public.s3.amazonaws.com/DHS_China_Webinar.mp4
  11. Going to try and inject a bit more activity into this thread, although what I am doing is merely documenting what others have suggested so far:
    • DO ensure that robots.txt is properly set on your Automate server. If you can Google your Automate server's hostname and get a result, this is BROKEN and should be fixed ASAP.
    • DON'T publish your Automate server's FQDN. This includes linking to it from your company's website. That's like a flashing signpost that says "THIS WAY TO HACK ME!"
    • DON'T allow customers downstream access to your Automate server. This dramatically increases your attack surface, as every customer must then be as secure as you are.
    • DO enable multifactor authentication / MFA / 2FA on All The Things. Seriously. Single-factor (username/password) authentication is not sufficient in 2019.
    • DON'T share credentials except in cases of absolute necessity (only one login is available and you can't afford a single point of failure if the one person who knows it disappears).
    • DO disable AND REMOVE any unused or unneeded plugins from your Automate environment. Each is another potential attack vector that you eliminate by removing it.
    • DON'T expose port 3306 on your external network interface / the Internet. This is MySQL's default port and is yet another attack vector that you can easily eliminate.
    More to follow as it is suggested.
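Two of the checks above are easy to self-audit. A minimal sketch (not an official Automate tool; the hostname in the commented example is hypothetical) that shows a crawler-blocking robots.txt and a simple TCP probe you could point at your own server to confirm port 3306 is not reachable:

```python
import socket

# Recommended robots.txt contents for an Automate server: refuse all
# crawlers so the hostname does not end up in search engine results.
RECOMMENDED_ROBOTS_TXT = "User-agent: *\nDisallow: /\n"

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace the hypothetical hostname with your own server):
# if port_open("automate.example.com", 3306):
#     print("WARNING: MySQL port 3306 is exposed -- block it at the firewall")
```

Run the probe from a network *outside* your perimeter; a check from the LAN will happily connect even when the firewall is doing its job.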
  12. A few suggestions from a self-education and awareness aspect:
    • Have a look at https://www.sans.org/reading-room/whitepapers/compliance/compliance-primer-professionals-33538 - it's nearly 10 years old but is still a good overview of still-applicable regulations and standards.
    • Better yet, register at https://www.sans.org [free] and get access to behind-the-authwall whitepapers similar to (and often more recent than) the above.
    • Head over to https://www.us-cert.gov/ncas/alerts and sign up for regular emails or subscribe to the RSS feed to stay current with impactful vulnerabilities.
    • Layer, layer, layer. No mitigation measure is 100% effective, and to quote @DarrenWhite99: That said, however, if we work together as a community to harden the platforms and tools we use as a group, we might just make Automate less of a target overall.
  13. The answer you received is essentially correct. The Automate Patch Manager acts as a front end for the Windows Update API, which is exposed by the Windows Update Agent (WUA), a Windows component that resides on the managed endpoint. The WUA is the behind-the-scenes component that the Windows Update Control Panel interfaces with. The patches that the Automate Patch Manager displays are the aggregated, cumulative results that Automate discovers by querying each endpoint's WUA with two questions: What patches are already installed on this machine? What patches are available for this machine? Automate's business logic massages the data and presents it in a centralized, mass-manageable form, but without sourcing that data from the WUA, it's no more than a computerized Jon Snow - it knows nothing. Thus, your options are:
    • Whitelist all the required Windows Update internet sources in your ACLs - https://social.technet.microsoft.com/Forums/en-US/90843d78-47a0-4136-9c1f-d5450ea8cd80/need-windows-update-servers-ip-address-range-to-allow-in-firewall?forum=systemcenterupdates has an unofficial list provided by an official Microsoft resource;
    • Continue to use WSUS to, at a minimum, source the patches, and bear the pain of integrating Automate with WSUS; or
    • Drink the Micro$oft KoolAid and implement Microsoft SCCM alongside Microsoft WSUS for truly managed WSUS patching.
    Good luck!
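The two-question flow above can be illustrated with a toy model. This is not Automate's actual code, just a sketch of the aggregation idea: each hypothetical endpoint answers the two WUA questions, and the server rolls the answers into a patch-to-missing-machines view.

```python
from collections import defaultdict

# Hypothetical per-endpoint answers to the two WUA questions:
# "installed" = what patches are already on this machine,
# "available" = what patches the WUA says apply to this machine.
endpoints = {
    "WKSTN-01": {"installed": {"KB5001"}, "available": {"KB5002", "KB5003"}},
    "WKSTN-02": {"installed": {"KB5001", "KB5002"}, "available": {"KB5003"}},
}

def aggregate(endpoints):
    """Roll per-endpoint WUA results into a patch -> machines-missing-it map."""
    missing = defaultdict(set)
    for name, data in endpoints.items():
        # A patch is "missing" on a machine if it is available but not installed.
        for kb in data["available"] - data["installed"]:
            missing[kb].add(name)
    return dict(missing)
```

The key point the model captures: the server never decides applicability itself; it only summarizes what each endpoint's WUA reported.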
  14. Posting this here in case it helps others; info sourced from CW Support:
    The problem: LTShare was not designed for split-server environments. If you have a three-way split, with a dedicated Web Server and Automation Server, they each end up with their own LTShare, and nothing synchronizes the content share folders. A file upload would land on the Web Server, but a script running on the Automation Server would then try to access the file locally and be unable to find it.
    The solution: The File Service was written to solve this. It is a new program set up to run as a Windows Service on the Automation Server (e.g. alongside the database agent). This is largely an under-the-hood system change; there are no UI-impacting changes that would require new screenshots.
    File Service: When something (e.g. a remote agent) tries to download a file from the system, it requests the file from the Web Server. The Web Server first checks its local LTShare directory for the file; if the file is outdated or does not exist, it communicates with the File Service to get a current copy. The Web Server then streams the content back to the entity requesting the download while simultaneously saving a copy to its own LTShare directory, and that local copy can serve future requests for the file. The Web Server communicates with the File Service on port 12413, so in split environments this port must be open between the Web Server and the Automation Server. If the File Service is not running for whatever reason, the file download will fail.
    Along with the other changes to the LTShare, there will no longer be a need to map the LTShare to workstations running the Control Center. When a user logs in and the Control Center starts, it runs through as it currently does. During and after launch, it downloads the files that used to be pulled from the LTShare and places them on the machine running the Control Center; these include the MIBs and legacy reports (Crystal Reports). Screenshots are now downloaded by the Control Center as needed when loading Computer Management screens.
    Some additional details that may be useful for support: The File Service created on the Application Server keeps a new table named "PrivateFileShare", which contains file information including the FileHash and Version of every file in the Application Server's LTShare. This information is gathered via a file on the LTShare called .Folder, which contains details on the state of these files. The service also brokers the Web Servers' access to the Application Server's LTShare. When a Web Server receives a request for a file, it checks its own LTShare; if the file is there, it checks the "PrivateFileShare" table to make sure the file matches the one on the App Server, and if it doesn't, it copies the file from the App Server and overwrites the local one. When the Web Server has to get a file from the File Service, it immediately serves that file for download by streaming the data from the File Service through the Web Server to whatever requested it, while storing a local copy so that subsequent requests can be served directly by the Web Server without streaming from the File Service again.
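The check-hash-then-fetch-and-cache flow described above can be sketched as a toy model. This is not the actual File Service code; the class and attribute names are my own invention, and a dict stands in for the "PrivateFileShare" table:

```python
import hashlib

class FileService:
    """Stands in for the File Service on the Automation/Application server
    (the one the Web Server reaches on port 12413)."""
    def __init__(self, ltshare: dict):
        self.ltshare = ltshare  # path -> bytes: authoritative LTShare copies
        # Model of the "PrivateFileShare" table: path -> hash of current file.
        self.private_file_share = {
            p: hashlib.sha256(b).hexdigest() for p, b in ltshare.items()
        }

    def fetch(self, path: str) -> bytes:
        return self.ltshare[path]

class WebServer:
    """Serves downloads from a local LTShare cache, falling back to the
    File Service when the cached copy is missing or stale."""
    def __init__(self, file_service: FileService):
        self.cache = {}  # local LTShare: path -> bytes
        self.fs = file_service

    def download(self, path: str) -> bytes:
        data = self.cache.get(path)
        current_hash = self.fs.private_file_share.get(path)
        # Serve the cached copy only if its hash matches the table entry.
        if data is not None and hashlib.sha256(data).hexdigest() == current_hash:
            return data
        # Otherwise stream from the File Service and cache locally for next time.
        data = self.fs.fetch(path)
        self.cache[path] = data
        return data
```

The design point this captures: the Web Server never trusts its local copy blindly; the hash comparison against the central table is what keeps split-server caches consistent.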
  15. This configuration is absolutely not officially supported, and if you ever plan on raising a ticket with CW Support again, do not do this. The out-of-box configuration has zero redundancy: no load balancing, no replication, nothing that isn't a single point of failure. Split-server config is relatively new to the LT platform and allows for, at most, a three-way split between database, IIS, and automation server. The official docs required to do this are guarded more closely than the recipe for Coca-Cola, or perhaps Kentucky Fried Chicken's secret blend of herbs and spices. If you have some advanced DBA knowledge, you may be able to set up binlogging / master-slave replication, but don't even try to enter a support ticket if you do. Final note: you would probably be better off with some type of HA virtualization setup that would otherwise be transparent to LT - vMotion comes to mind. Good luck!