
KKerezman

Members

Everything posted by KKerezman

  1. Thanks to the OP for this script! I made a couple of changes, but the core of it was what I needed to get started. I'm filtering out results where the "IDProcess" value is '0' (the Idle process) and dropping the thread ID data. I also run the PS command as System instead of "Run As Admin", since the latter fails in a lot of cases. My current full working PS command:

     Get-WMIObject -cl Win32_PerfFormattedData_PerfProc_Thread | ? {$_.Name -notlike '*_Total*' -and $_.idprocess -notlike '0' } | Sort-Object PercentProcessorTime -desc | select -first 20 | ft -auto Name,IDProcess,PercentProcessorTime

     I don't use OutFile either; I just capture the PS result natively and email the report via this text in the message body:

     <pre> %powershellresult% </pre>

     That way Outlook formats the table output in readable fashion.
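     Side note: if your agents are all on PowerShell 3.0 or newer, the same report should work with Get-CimInstance in place of the older Get-WMIObject. A sketch, untested in Automate on my end, with the IDProcess filter written as a numeric comparison:

        # Top 20 busiest threads, skipping the _Total rollups and the Idle process (PID 0)
        Get-CimInstance -ClassName Win32_PerfFormattedData_PerfProc_Thread |
            Where-Object { $_.Name -notlike '*_Total*' -and $_.IDProcess -ne 0 } |
            Sort-Object PercentProcessorTime -Descending |
            Select-Object -First 20 |
            Format-Table -AutoSize Name, IDProcess, PercentProcessorTime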
  2. Ah, goody. It's so nice when we onboard a client and tell them to look for our branded icon and instead it's the green gear. Quality workmanship there, CW team.
  3. Nobody's ever accused any of us in our organization of being too classy, but I'll look into this nonetheless... 😅
  4. Ian, I'm totally sending my boss your way the next time he gripes about the desktop client closing down after he leaves it running 24/7. 😄
  5. (Apologies for the necroposting.) I'm glad I'm not alone in the frustration with trying to script shell commands on Mac agents. I'm getting a lot of "OK" or even weirder results. For instance, I'm trying to run a 'date' command (I need to populate a variable with "today's" date in a particular string format so I can then parse a log file looking for that string match), and yes, 'date' takes format specifiers in %X form, which Automate needs escaped by doubling up the % signs, great, fine. So why does the Shell step of "date '+%%m/%%d/%%y'" give me a %shellresult% of "%m/%d/%y"? Argh. I can run the command (minus the extra % signs) in Terminal on a Mac endpoint and get the expected/desired outcome. I'm not asking for anything terribly complicated. I'm starting to feel like CW just kind of stopped all Mac-related development shortly after buying LabTech and I shouldn't bother trying.
  6. Yikes... I don't think ticket comments in Automate have any awareness of the idea of internal notes on the Manage side, I'm afraid. I'd be glad to be proved wrong though.
  7. Hello, @KyotoUK, hopefully what we've done may be of some use. Here's the rough breakdown of the script we use. (And we're using an EDF to track whether we've added/updated the admin account.)

     Shell 'net user [username]'. IF %shellresult% contains [username], jump to the part of the script where we just use a batch file to update the password. IF %shellresult% doesn't, we need to create a new account.

     New account - Use the 'add user' script step, then a shell command for 'net localgroup Administrators [username] /add'. To make the password not expire we run 'WMIC USERACCOUNT WHERE (Name='[username]' and Domain='%computername%') SET PasswordExpires=FALSE', set the tracking EDF to 1, then exit the script.

     Update password instead - Download a batch file which basically just runs 'net user' to set the password. We use that WMIC command again just to reinforce the 'no expire' part. Then delete the batch file and set the tracking EDF to 1.

     Why the batch file? Because a 'net user' command and its results show up in the Commands list on a given machine, so folks who have access to the Commands log but shouldn't know the special password would get to see it. (Oh yes, use ECHO OFF in that batch file.)

     The EDF is just a toggle to say "yes, the special admin account has been taken care of." Then you can use its 0 or 1 status to do a search that populates a group, and that group can be set to run the add-or-update-admin-user script every couple of hours. Once the script runs, the EDF/search will take care of removing the endpoint from the group.
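     In case it helps to see the branching in one place, here's a rough single-script sketch of the same create-or-update flow in PowerShell. The account name and password are placeholders, and remember the whole point of our batch file is that the password never lands in the Commands log; run as a visible command, a sketch like this would expose it, so treat it purely as an illustration of the logic:

        # Hypothetical values -- substitute your own account name and password source
        $UserName = 'LocalAdminAcct'
        $Password = 'SuperSecretHere'

        # Does the account already exist? 'net user' exits non-zero if it doesn't.
        net user $UserName *> $null
        if ($LASTEXITCODE -ne 0) {
            # No such account: create it and add it to the local Administrators group
            net user $UserName $Password /add
            net localgroup Administrators $UserName /add
        } else {
            # Account exists: just reset the password
            net user $UserName $Password
        }

        # Either way, (re)enforce "password never expires"
        wmic useraccount where "Name='$UserName' and Domain='$env:COMPUTERNAME'" set PasswordExpires=FALSE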
  8. Chiming in real quick to say: We ended up abandoning the SharePoint ISO hosting after one too many "file, what file?" situations. Since we're paying for a Wasabi storage account, we made a bucket with a public read-only ISO in it instead; that seems to be working much, much better. Also, the Win10 mount trick @SteveYates described is working a treat. Nice one!
  9. I'm glad I found this post. I just finished fiddling with my copy of Drive Details to create Drive History - 90 Days, minus the fiddly data I don't want (why is there output for the drive letter when the header for each section is the drive letter?), with a nice wide expanded graph per drive for better visibility, plus actual sorting of the drives in letter order (the original Drive Details was listing our fileserver's drives in order D, C, F, then E). I will never enjoy bashing around in DevExpress, but hey, whatever gets the job done.
  10. Jay, I can totally do that. You just need to prep a data field under Computers called something like "Windows 10 Version" (put it into whatever folder you like) and then schedule a periodic run of a data collection script. Our script is a bit brute force, really: a Powershell command of (Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').CurrentBuild followed by a series of IF checks (IF %powershellresult% is not 10240 THEN jump to :Not1507, otherwise set the Windows 10 Version EDF to 1507) and so forth. Twice a year we have to expand the script to accommodate the new release ID number. You'll need the list of which "CurrentBuild" values correspond to which release ID; that's available on Wikipedia and such.

      Steve, that's pretty slick. I'd forgotten about Win10's ability to mount ISOs without 3rd party utils. I'll have to look into bringing some of that into our updater system.
  11. I was today years old when I learned there's a REST API inside each ImageManager install. Sweet! Now, why ShadowControl is claiming our 7.5.6 installs are "current" when clearly there've been a couple of significant releases in the meantime is anyone's guess...
  12. Chiming in on this because I just dealt with trying a non-working method to do this, then followed up with an actually-working (so far) method. At first I tried setting up an Event Log Remote Monitor for the group in question (long story short, I want to update BitLocker status with a script after reboots so I know if people have used the "suspend" command incorrectly pre-reboot), but while the monitor applied to each machine it refused to do anything.

      The solution? I copied one of the built-in "EV - " Internal Monitors, specifically the BlackListed Events - Symantec Endpoint Protection one, and replaced its Additional Condition with:

      eventlogs.`Source` = 'EventLog' and eventlogs.eventid=6009

      I set it to Once Per Day mode with hourly checks. (I don't need the script to run more than once a day on any given machine, really.) I also had to add an Event Blacklist entry (Dashboard -> Config -> Configurations -> Event Blacklist) for ID 6009, source EventLog, Log Name of System, Event Type of Information, with Message and Category left as % wildcards.

      With that, and with this new Internal Monitor enabled in my BitLocker-specific group and set to fire off the Alert Template that runs my detection script, I seem to be in business. Make of all this what you will! Maybe there's a slightly more efficient way? I'm all ears.
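      (If you're building the detection-script side of this, a minimal check might be nothing more than a BitLocker protection-status query. A sketch, assuming the endpoint has the BitLocker PowerShell module that ships with Win 8 / Server 2012 and later; how you surface the result, via ticket, EDF, or script log, is up to you:)

         # After the reboot, is protection actually back on for the OS drive?
         $osVolume = Get-BitLockerVolume -MountPoint $env:SystemDrive
         if ($osVolume.ProtectionStatus -ne 'On') {
             "WARNING: BitLocker protection is $($osVolume.ProtectionStatus) on $env:SystemDrive"
         } else {
             "BitLocker protection is On for $env:SystemDrive"
         }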
  13. I had a nice hour-long (or so) talk with Mister Goodwin, the OP, a few days ago. If you want to chew the fat at some point I'm game!
  14. I was a 12-year veteran of Kaseya when we pulled the trigger on the Automate migration. Now that I'm about 10 months in, I have Thoughts. (Most of them good, some very annoyed, I suppose.) I can probably carve out some time to talk about how it went, what we like, what we miss, etc.
  15. You're welcome! Our EDF is populated by a script which runs this Powershell command: (Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').CurrentBuild and then passes the results through a series of IF checks along the lines of "If @powershellresult@ is NOT 10240 THEN Jump To :Not1507, otherwise set the EDF to 1507 and exit," and so forth. We just tack a new IF check on twice per year. The script gets run weekly against online Win10 endpoints to keep the EDF updated. This lets us do groupings and reports and so on.
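      If you'd rather keep the build-to-version mapping inside PowerShell instead of a chain of IF steps, a lookup-table sketch like this could return the release ID in one go. The table below only goes through 1909 and would still need extending twice a year, and writing the value to the EDF still happens in the Automate script step:

         # Map CurrentBuild values to Windows 10 release IDs (extend as new releases ship)
         $buildMap = @{
             '10240' = '1507'; '10586' = '1511'; '14393' = '1607'
             '15063' = '1703'; '16299' = '1709'; '17134' = '1803'
             '17763' = '1809'; '18362' = '1903'; '18363' = '1909'
         }

         $build = (Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').CurrentBuild
         if ($buildMap.ContainsKey($build)) { $buildMap[$build] } else { "Unknown build $build" }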
  16. (...and then find a way to hack the UI to turn the online/offline flag indicator a different color, say blue for instance, *coughKaseyacough*) 😑
  17. Your mileage may almost certainly vary, but for the sake of hopefully pointing you in a useful direction, I'll describe our current system. We imported and updated our old kludgey way of doing it from Kaseya (we migrated after the end of last calendar year). The script goes a bit like this:

      Check for and deploy 7-Zip if needed. (The standalone executable can't do ISOs.)
      Download the ISO. (We're storing it at... ugh... SharePoint at the moment. Hey, it works.)
      Create a working folder and extract the ISO into it.
      Make sure the setup.exe is actually present, and bail with a detailed email to the tech if it isn't, since clearly something went horribly awry with the preceding steps.
      Pop up a "please don't turn off your system, you asked for this" message.
      Run 'path\to\setup.exe /auto upgrade /ShowOOBE none /quiet'

      And yes, there's a cleanup script to be run afterward. If the tech has a ticket to be doing this at all, they should be tidying up as part of that ticket.

      As for how we handle Win10 feature updates logistically, it's a matter of looking through the search we built for likely candidates, then coordinating time with the clients for access to the relevant machines. (The search is centered on an EDF which we populate with the build number.) It may not be fully automated, but so far it's been working moderately well.
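      For illustration, the extract-and-run portion boils down to something like the sketch below. The paths are placeholders, and the real thing is split across Automate script steps, with the error-handling email and the user-facing message living in their own steps:

         # Hypothetical paths -- the real ones come from Automate script variables
         $IsoPath    = 'C:\Temp\Win10_Upgrade.iso'
         $WorkFolder = 'C:\Temp\Win10_Upgrade'

         # Extract the ISO with the full 7-Zip install (the standalone exe can't open ISOs)
         & 'C:\Program Files\7-Zip\7z.exe' x $IsoPath "-o$WorkFolder" -y

         # Bail out loudly if extraction didn't leave us a setup.exe to run
         if (-not (Test-Path "$WorkFolder\setup.exe")) {
             throw "setup.exe missing from $WorkFolder -- extraction appears to have failed"
         }

         # Kick off the silent in-place feature update
         & "$WorkFolder\setup.exe" /auto upgrade /ShowOOBE none /quiet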
  18. We migrated ~2500 endpoints from Kaseya to Automate over the start of the year going into mid-springtime. I have a few thoughts, which may or may not match up with anyone else's, so... grain of salt, I guess.

      Make the heck sure that you have consultation hours banked with ConnectWise. You will have questions, and you want a consultant available who can get you answers. We did this and it's 95% of the reason why we have a functional system now. (I mean, I'm good, but adjusting to that-which-was-LabTech was a thing and a half.)

      Be prepared to disable all kinds of built-in monitoring systems. Most of them are so, so noisy. Also be prepared to dig through the archives of this here forum in search of how to edit the built-in searches and monitors and scripts to deal with things like, oh, Automate building semi-bespoke disk space monitors for attached USB drives. Ahem.

      Learn the EDF (custom fields)/search/group triumvirate. It is where the bulk of your successful automation efforts will take place, and it's ridiculously powerful and good. When asked, I reply that it's my favorite thing about having done the migration. (Second-favorite? Control, formerly known as ScreenConnect.)

      Sean, above, makes a case for having some SQL knowledge, which is good, but if you're cloud-hosted by CW themselves you'll have... limited access to the actual SQL server. We have generally made do without getting into the SQL weeds. Could we be doing better if we were SQL gurus? I don't doubt it. But don't feel like you're going to be hamstrung without it.

      User permissions are super duper weird. In order to grant an account the ability to change an endpoint's location we had to give their class access to... I think it was one of the Reporting tickyboxes? Whatever, it was super counter-intuitive. Other than the rickety agent-deployment process itself, permissions are the most frustrating thing in Automate.

      Think through every possible outcome when building your scripts, and leave breadcrumbs (log entries with shell command results, mostly) for "well this shouldn't have happened" situations. Later!You will thank Today!You when troubleshooting why your script didn't do what you thought it would do.

      Scripts have permissions, too. Make sure the more dangerous scripts (like ones that perform queries against the live Automate database) aren't available to user classes that don't need them, whether that's your rank-and-file techs or any high-end clients you may decide to allow some kind of Automate access.

      That'll do for now, I suppose.
  19. Yeah, un-retiring has been a joke here. I ended up in a two-hour support chat session and the baffled tech basically punted to "rip out everything on the PC and start over." (We could run scripts and send commands but NOTHING to do with the ScreenConnect piece would work. Un-and-re-installing had no useful effect. Support had us play around in the relevant parts of the registry and everything.)
  20. Our main use of Manage's Configurations is for billing purposes and as a waypoint on the way to becoming an IT Glue Configuration. A Manage Configuration is really just a way to document assets in that environment; before ITG came along it was where we had techs create most of the non-RMM-provided assets (printers, routers, switches, NAS/SAN units, etc.).

      If you want to sync more than just the built-in Managed Server and Managed Workstation types, you'll want to do some reading and planning on what other kinds of devices you want to sync. (We have Types for various devices that we sync to IT Glue, tagged with "-ITG" in the name for easy spotting. YMMV, etc.)

      To control which clients' agents and devices become Configurations, you want the Agreement Mapping section of the Manage Plugin in Automate. Build an Asset Template (button at the bottom of that display) for that client's agreement type, decide which Automate asset type becomes which kind of Manage configuration type (and which product in Manage), and select that template in the drop-down next to that client in the list. Without an assigned template the plugin has no way to know what to do with the assets, so it just won't sync them; only the clients you've selected a template for will get sync'd.

      There's one big honking-huge configuring-the-Manage-plugin doc here: https://docs.connectwise.com/ConnectWise_Automate/ConnectWise_Automate_Documentation/080/040/020/040

      I just went through the process of getting devices out of the Network Probes into Manage/ITG, so I might be able to answer certain types of questions, but I'm far from an expert at this point!
  21. You can't edit Agent Monitor Creation - Disk* but you can copy it, then just edit the step in the Agent Monitor Creation* script that runs Agent Monitor Creation - Disk* to run your copy instead.
  22. Well of course I was coming at this the long way 'round. Adding the Client Name column to your instructions got me what I needed. (I didn't mention the Client Name requirement in the original post because I hadn't even gotten as far as the Computer Name, and figured if I could solve that then the client info would be solvable through a similar method.) May Washu-sama bless you and keep you, good sir.
  23. Hello, neighbors. I'm still only a few months into this Automate experience. I spent the morning going over https://www.gavsto.com/labtech-report-center/ (missing images and all) as well as what I can grok of the DevExpress docs online, all so I can make a single-table report showing which machines have which CrashPlan versions installed. (Can this be done in a Search? KIND OF, but as soon as you do an Application Match you lose the ability to get anything under that match group into the Excel results, grr, argh. Can it be done in SQL? Maybe, but I'm not a SQL guy and we're Hosted, so our access to SQL is very limited. Can it be done in a Dataview? Boy, wouldn't a non-wonky Dataview editor be nice!)

      Anyway: instead of adding new forehead-shaped dents to my wall, I'm here to see if anyone can nudge me in the right direction. All I really want, at the moment, is to get the computer name to match up to a computer ID in my little software installed-version table. I have two Queries in my Data Source Editor:

      CrashPlan Software - Source: 'software', Where SQL: Name like '%CrashPlan%', Included columns: ComputerID, Name, Version, DateInstalled. Basically this does work! And if I didn't mind looking up agent ID numbers I wouldn't need to tie into the other table, but I do mind, so I do need the other table.

      Client Computers - Source: 'computers', Included columns: ComputerID, Name, ClientID.

      I have a parent/child relationship tied to the ComputerID field. Great, but no matter what I do, regardless of which I make the Parent and which the Child, when I try to bring in computers.Name and/or computers.ClientID I get either the first result repeated or blank results. Is this just brokenness in the Report Designer, or am I fundamentally misunderstanding how this is supposed to work?
  24. Darren, sorry about the delay; I apparently didn't set myself up to get notified on replies. Good questions! The users in question are assigned to All Clients. The Location's "Deployment and Defaults" has been left Not Selected. (Our migration consultant stressed making sure there's a Login for Admin Access selected and that the Default template is chosen, but said nothing about the Default Group For New Agents option. If we're missing a best practice there, I'm all ears.) Since we're hosted I can't really check the server to see if it's having issues building the searches/groups. But our techs are definitely set to All Clients, which one would think would give them what they need. I might try enabling Show All under Computers in the User Class Manager, though I feel like that might bite me in the backside if we need to carve a client out from their visibility later on. Thanks for the info!