
KKerezman last won the day on January 27

KKerezman had the most liked content!

Community Reputation

8 Neutral

My Information

  • Location
    Hillsboro OR USA
  • Agent Count
    2000 - 3000 Agents



  1. (Apologies for the necroposting.) I'm glad I'm not alone in the frustration with trying to script shell commands on Mac agents. I'm getting a lot of "OK" or even weirder results. For instance, I'm trying to run a 'date' command (I need to populate a variable with "today's" date in a particular string format so I can then parse a log file looking for that string match) and yes, 'date' takes variables in %X format which Automate needs escaped by doubling up the % signs, great, fine. So why does the Shell step of "date '+%%m/%%d/%%y'" give me a %shellresult% of "%m/%d/%y"? Argh. I can run the command (minus the extra % signs) in Terminal on a Mac endpoint and get the expected/desired outcome. I'm not asking for anything terribly complicated. I'm starting to feel like CW just kind of stopped all Mac-related development shortly after buying LabTech and I should not bother trying.
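For anyone hitting the same wall: the command the Shell step is trying (and failing) to run can be sanity-checked directly in Terminal first. A minimal sketch of the intended logic, with an illustrative log path (inside Automate the % signs would need to be doubled, e.g. `date '+%%m/%%d/%%y'`, so they survive Automate's own variable parsing):

```shell
# Build today's date as m/d/y -- the string we want to grep the log for.
TODAY=$(date '+%m/%d/%y')
echo "searching for: $TODAY"
# Hypothetical log file -- count today's entries; suppress errors if absent.
grep -c "$TODAY" /var/log/example.log 2>/dev/null || true
```

If this works in Terminal but the Shell step still returns the literal `%m/%d/%y`, the escaping is being consumed somewhere between Automate and the agent's shell.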
  2. Yikes... I don't think ticket comments in Automate have any awareness of the idea of internal notes on the Manage side, I'm afraid. I'd be glad to be proved wrong though.
  3. Hello, @KyotoUK, hopefully what we've done may be of some use. Here's the rough breakdown of the script we use. (We're using an EDF to track whether we have or haven't added/updated the admin account.) First, Shell 'net user [username]'. IF %shellresult% contains [username], jump to the part of the script where we just use a batch file to update the password. IF %shellresult% doesn't, we need to create a new account. New account: use the 'add user' script step, then a shell command for 'net localgroup Administrators [username] /add'. To make the password not expire we run 'WMIC USERACCOUNT WHERE (Name='[username]' and Domain='%computername%') SET PasswordExpires=FALSE', set the tracking EDF to 1, then exit the script. Update password instead: download a batch file which basically just runs 'net user' to set the password. We use that WMIC command again just to reinforce the 'no expire' part. Then delete the batch file and set the tracking EDF to 1. Why the batch file? Because a 'net user' command and its results show up in the Commands list on a given machine, so folks who have access to the Commands log but shouldn't know the special password would get to see it. (Oh yes, use ECHO OFF in that batch file.) The EDF is just a toggle to say "yes, the special admin account has been taken care of." You can then use its 0 or 1 status in a search that populates a group, and set that group to run the add-or-update-admin-user script every couple of hours. Once the script runs, the EDF/search will take care of removing the endpoint from the group.
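For reference, a minimal sketch of what that downloaded password-update batch file could look like ([username] and the password are placeholders; the point is that the password never appears in the Commands log because it lives inside the file):

```bat
@ECHO OFF
REM Hypothetical helper: reset the local admin password quietly.
net user [username] "PlaceholderP@ss" >NUL 2>&1
REM Reinforce the no-expire flag, same WMIC command as in the script.
WMIC USERACCOUNT WHERE (Name='[username]' and Domain='%computername%') SET PasswordExpires=FALSE >NUL 2>&1
```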
  4. Chiming in real quick to say: We ended up abandoning the Sharepoint ISO hosting after one too many "file, what file?" situations. Since we're paying for a Wasabi storage account, we made a bucket with a public read-only ISO in it instead, that seems to be working much, much better. Also the Win10 mount trick @SteveYates described is working a treat. Nice one!
  5. I'm glad I found this post. I just finished fiddling with my copy of Drive Details to create Drive History - 90 Days, minus fiddly data I don't want (why is there output for the drive letter when the header for each section is the drive letter?) and with a nice wide expanded graph per drive for better visibility, plus actually sorting the drives into letter order (the original Drive Details was listing our fileserver's drives in order D, C, F, then E). I will never enjoy bashing around in DevExpress but hey, whatever gets the job done.
  6. Jay, I can totally do that. You just need to prep a data field under Computers called something like "Windows 10 Version" (put it into whatever folder you like) and then schedule a periodic run of a data collection script. Our script is a bit brute force, really: Powershell command '(Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').CurrentBuild' and a series of IF checks (if powershellresult not = 10240 THEN jump to Not1507, set Windows 10 Version EDF to 1507) and so forth. Twice a year we have to expand the script to accommodate the new release ID number. You'll need the list of which "CurrentBuild" strings correspond to which release ID; that's available on Wikipedia and such. Steve, that's pretty slick. I'd forgotten about Win10's ability to mount ISOs without 3rd party utils. I'll have to look into bringing some of that into our updater system.
  7. I was today years old when I learned there's a REST API inside each ImageManager install. Sweet! Now, why ShadowControl is claiming our 7.5.6 installs are "current" when clearly there've been a couple of significant releases in the meantime is anyone's guess...
  8. Chiming in on this because I just dealt with trying a non-working method to do this and followed up with an actually-working (so far) method. At first I tried setting up an Event Log Remote Monitor for the group in question (long story short, I want to update Bitlocker status with a script after reboots so I know if people have used the "suspend" command incorrectly pre-reboot), but while the monitor applied to each machine it refused to do anything. The solution? Copied one of the built-in "EV - " Internal Monitors, specifically the BlackListed Events - Symantec Endpoint Protection one, and replaced its Additional Condition with: eventlogs.`Source` = 'EventLog' and eventlogs.eventid=6009 And set it to Once Per Day mode, hourly checks. (I don't need the script to run more than once a day on any given machine, really.) I also had to add an Event Blacklist (Dashboard -> Config -> Configurations -> Event Blacklist) entry for ID 6009, source EventLog, Log Name of System, Event Type of Information, with Message and Category left as % wildcards. With that, and Enabling this new Internal Monitor in my Bitlocker specific Group set to fire off the Alert Template that runs my detection script, I seem to be in business. Make of all this what you will! Maybe there's a slightly more efficient way? I'm all ears.
  9. I had a nice hour-long (or so) talk with Mister Goodwin, the OP, a few days ago. If you want to chew the fat at some point I'm game!
  10. I was a 12-year veteran of Kaseya when we pulled the trigger on the Automate migration. Now that I'm about 10 months in, I have Thoughts. (Most of them good, some very annoyed, I suppose.) I can probably carve out some time to talk about how it went, what we like, what we miss, etc.
  11. You're welcome! Our EDF is populated by a script which runs this Powershell command: (Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').CurrentBuild Then passes the results through a series of IF checks along the lines of "If @powershellresult@ NOT = 10240 THEN Jump To :Not1507, otherwise set EDF to 1507 and exit" and so forth. We just tack a new IF check on twice per year. The script gets run weekly against online Win10 endpoints to keep the EDF updated. This lets us do groupings and reports and so on.
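The chain of IF checks is just a build-number-to-release-ID lookup. A shell sketch of the same mapping (only a handful of the known CurrentBuild values are shown; in the real script the value comes from the PowerShell registry read above, and the result lands in the EDF instead of stdout):

```shell
# Map a Windows 10 CurrentBuild number to its release ID.
build=19045   # example input; the script reads this from the registry
case "$build" in
  10240) ver=1507 ;;
  10586) ver=1511 ;;
  14393) ver=1607 ;;
  19044) ver=21H2 ;;
  19045) ver=22H2 ;;
  *)     ver=unknown ;;   # a new release shipped; time to extend the list
esac
echo "$ver"
```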
  12. (...and then find a way to hack the UI to turn the online/offline flag indicator a different color, say blue for instance, *coughKaseyacough*) 😑
  13. Your mileage may almost certainly vary but for the sake of hopefully pointing you in a useful direction, I'll describe our current system. We imported and updated our old kludgey way of doing it from Kaseya (we migrated after the end of last calendar year). The script goes a bit like this: Check for and deploy 7-Zip if needed. (The standalone executable can't do ISOs.) Download the ISO. (We're storing it at... ugh... SharePoint at the moment. Hey, it works.) Create working folder and extract ISO into folder. Make sure the setup.exe is actually present and bail with a detailed email to the tech if it isn't, since clearly something went horribly awry with the preceding steps. Pop up a "please don't turn off your system, you asked for this" message. Run 'path\to\setup.exe /auto upgrade /ShowOOBE none /quiet' And yes, there's a cleanup script to be run afterward. If the tech has a ticket to be doing this at all, they should be tidying up as part of that ticket. As to how we deal with feature updates to Win10 logistically, it's a matter of looking through the search we built looking for likely candidates, then coordinating time with the clients for access to the relevant machines. (The search is centered on an EDF which we populate with the build number.) It may not be fully automated but so far it's been working moderately well.
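The extract-and-run portion of those steps, sketched as a batch fragment (the working folder, ISO path, and 7-Zip location are illustrative; the setup.exe switches are the ones from our script):

```bat
@ECHO OFF
REM Hypothetical paths -- extract the ISO into a working folder.
"%ProgramFiles%\7-Zip\7z.exe" x "C:\Temp\Win10.iso" -o"C:\Temp\Win10Upgrade" -y >NUL
IF NOT EXIST "C:\Temp\Win10Upgrade\setup.exe" (
  REM Something went horribly awry -- the real script emails the tech here.
  EXIT /B 1
)
"C:\Temp\Win10Upgrade\setup.exe" /auto upgrade /ShowOOBE none /quiet
```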
  14. We migrated ~2500 endpoints from Kaseya to Automate over the start of the year going into mid-springtime. I have a few thoughts, which may or may not match up with anyone else's, so... grain of salt, I guess.
Make the heck sure that you have consultation hours banked with ConnectWise. You will have questions, and you want a consultant available who can get you answers. We did this and it's 95% of the reason why we have a functional system now. (I mean, I'm good, but adjusting to that-which-was-LabTech was a thing and a half.)
Be prepared to disable all kinds of built-in monitoring systems. Most of them are so, so noisy. Also be prepared to dig through the archives of this here forum in search of how to edit the built-in searches and monitors and scripts to deal with things like, oh, Automate building semi-bespoke disk space monitors for attached USB drives. Ahem.
Learn the EDF (custom fields)/search/group triumvirate. It is where the bulk of your successful automation efforts will take place, and it's ridiculously powerful and good. When asked, I reply that it's my favorite thing about having done the migration. (Second-favorite? Control, formerly-known-as-ScreenConnect.)
Sean, above, makes a case for having some SQL knowledge. Which is good, but if you're cloud-hosted by CW themselves you'll have... limited access to the actual SQL server. We have generally made do without getting into the SQL weeds. Could we be doing better if we were SQL gurus? I don't doubt it. But don't feel like you're going to be hamstrung without it.
User permissions are super duper weird. In order to grant an account the ability to change an endpoint's location we had to give their class access to... I think it was one of the Reporting tickyboxes? Whatever, it was super counter-intuitive. Other than the rickety agent-deployment process itself, permissions are the most frustrating thing in Automate.
Think through every possible outcome when building your scripts, and leave breadcrumbs (log entries with shell command results, mostly) for "well, this shouldn't have happened" situations. Later!You will thank Today!You when troubleshooting why your script didn't do what you thought it would do.
Scripts have permissions, too. Make sure the more dangerous scripts (like ones that perform queries against the live Automate database) aren't available to user classes that don't need them: your rank-and-file techs, let alone any high-end clients you may decide to allow any kind of Automate access. That'll do for now, I suppose.