Leaderboard


Popular Content

Showing content with the highest reputation since 04/09/19 in all areas

  1. 6 points
    I found that Duong's script worked great when it was updating Dell SupportAssist 3.x, but when computers had Dell SupportAssist 1.x or 2.x installed it would simply install the new version side by side. I updated it to include some uninstallation logic: it first runs through all the known uninstallers, and if that fails it does a manual removal of the old software. It then goes into Duong's installation script to push the new version. This has automatically fixed 99% of the problem computers I had. Dell - May 2019 - Dell Support Assist Vulnerability - Updated.xml (the script, called "Dell - May 2019 - Dell Support Assist Vulnerability", is found in the "Share" folder if you leave the default)
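    For reference, the uninstall pass described above looks roughly like the following sketch, assuming the old builds register under the standard Uninstall registry keys; the version cutoff and silent switches are assumptions rather than values taken from the attached XML:
```powershell
# Hypothetical sketch of "run the known uninstallers first" for SupportAssist 1.x/2.x.
$roots = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
         'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
Get-ItemProperty -Path $roots -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName -like '*SupportAssist*' -and
                   $_.DisplayVersion -and [version]$_.DisplayVersion -lt [version]'3.0' } |
    ForEach-Object {
        if ($_.PSChildName -match '^\{[0-9A-Fa-f\-]+\}$') {
            # MSI-based install: remove silently by product code
            Start-Process msiexec.exe -ArgumentList "/x $($_.PSChildName) /qn /norestart" -Wait
        } elseif ($_.UninstallString) {
            # EXE-based uninstaller: the /S silent switch is an assumption for 1.x/2.x
            Start-Process cmd.exe -ArgumentList "/c `"$($_.UninstallString)`" /S" -Wait
        }
    }
# Anything that survives gets a manual cleanup (files, services, registry keys)
# before the installation script pushes the current version.
```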
  2. 6 points
    Just to help the community: I have our system run a BitLocker status script on all devices, and I made several EDFs on the device to store the information. All of these are LOCKED so none of the techs can alter or adjust them.
    TPM Enabled: runs the PowerShell command get-tpm | select -ExpandProperty AutoProvisioning. An IF statement marks the TPM EDF as enabled when the TPM is enabled. This tells me whether the device is able to be encrypted (we try to always have TPM).
    BitlockerEnabled: runs the PowerShell command Get-BitLockerVolume -MountPoint "C:" | Select -ExpandProperty ProtectionStatus. An IF statement checks the 'BitlockerEnabled' box when ProtectionStatus is On.
    The script always runs the two PowerShell commands below, regardless of whether BitLocker is enabled:
    Bitlocker Recovery Key: manage-bde -protectors -get C:
    Get Bitlocker Status of C:: manage-bde -status C:
    The 'Date checked for Encryption' EDF is a self-diagnosing piece that tells me when the script last ran.
    Example contents of each EDF:
    Bitlocker Recovery Key: BitLocker Drive Encryption: Configuration Tool version 10.0.17134 Copyright (C) 2013 Microsoft Corporation. All rights reserved. Volume C: [] All Key Protectors TPM: ID: {E123456-E123-F123-F123-D123456789012} PCR Validation Profile: 0, 2, 4, 11 Numerical Password: ID: {1DDB4148-A123-B123-C123-B12345678901} Password: 123456-123456-123456-123456-123456-123456-123456-123456
    Bitlocker Status of C:: BitLocker Drive Encryption: Configuration Tool version 10.0.17134 Copyright (C) 2013 Microsoft Corporation. All rights reserved. Volume C: [] [OS Volume] Size: 475.49 GB BitLocker Version: 2.0 Conversion Status: Used Space Only Encrypted Percentage Encrypted: 100.0% Encryption Method: XTS-AES 128 Protection Status: Protection On Lock Status: Unlocked Identification Field: Unknown Key Protectors: TPM Numerical Password
    Attached is the SQL for importing the EDF files as well as the script used. Hope this helps! Get Bitlocker Status of Device.xml Bitlocker-ComputerEDF.sql
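    Pulled together, the collection logic amounts to roughly the minimal sketch below, assuming a C: OS volume; writing the results into the locked EDFs is handled by the Automate script itself and is not shown:
```powershell
# Minimal sketch of the checks described above, assuming a C: OS volume.
$tpmReady   = Get-Tpm | Select-Object -ExpandProperty AutoProvisioning   # "TPM Enabled" EDF
$protection = Get-BitLockerVolume -MountPoint 'C:' |
              Select-Object -ExpandProperty ProtectionStatus
$bitlockerOn = ($protection -eq 'On')                                    # "BitlockerEnabled" EDF

# Always captured, whether or not BitLocker is enabled:
$recoveryKey = manage-bde -protectors -get C: | Out-String   # "Bitlocker Recovery Key" EDF
$status      = manage-bde -status C: | Out-String            # "Get Bitlocker Status of C:" EDF
$checked     = Get-Date                                      # "Date checked for Encryption" EDF
```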
  3. 5 points
    Hey all, I've been active in the community for a few years now but have never really posted in the forums. I've put together a script/remote monitor to address the latest RDP vulnerability from Microsoft, and figured I've learned enough from the MSPGeek community that it can't hurt to give some back. The first link is a SQL inject that will create a remote monitor on your "Service Plans\Windows Servers\Managed 24x7" and "Service Plans\Windows Workstations\Managed 8x5" groups. The groups it installs the monitor on are defined in the inject by GroupID, so it's easy to change that GroupID to whatever you want before you run it.
    !!!WARNING!!! You're running a SQL inject on your DB... this can be dangerous, proceed at your own risk. Read through the inject and make sure you're comfortable with what it's doing. This monitor is also live-pulling a powershell script from MY github. This means that if I decided to have a bad day and change the powershell script in my github to something malicious, I could effectively run my malicious code on ALL of your machines. I'm not malicious, but ya know... be smart, be safe! Feel free to host the powershell script at your own location and just swap the URL on the monitor. Lastly, I've tested this on several machines in my environment, but that doesn't mean there can't be an issue I haven't run into yet. If you find a problem, let me know so I can fix it!
    Download Links
    SQL Inject: https://github.com/dkbrookie/Automate-Public/blob/master/CVE/CVE-2019-1182/SQL/CVE-2019-1182_Remediation.sql
    Powershell: https://github.com/dkbrookie/Automate-Public/blob/master/CVE/CVE-2019-1182/Powershell/CVE-2019-1182.ps1
    Script breakdown...
    The script outputs either !ERROR:, !WARNING:, or !SUCCESS: with details on the state of the install process. If you set the monitor alert template to create a ticket (I have it set to Default - Do Nothing, so change it to what you want), it will output the Powershell results right into the ticket. The keywords above are used in a state-based remote monitor in Automate, so this will briefly go through what that looks like. The script checks the OS of the machine and figures out the correct KB number it needs to have installed to patch this vulnerability. Once it finds the right KB, it checks to see if the KB is installed or not. If it's not installed, it will install it with no reboot, so this is safe to run mid-day. That means the monitor CHECK itself is actually installing the remediation - there is no separate script attached; the patch download/install is all self-contained in the monitor check.
    !ERROR: will only output if the machine is eligible to receive the CVE-2019-1182 patch and something in the script actually failed and needs attention
    !WARNING: will only output if the machine is not eligible for the CVE-2019-1182 patch. The reason I've chosen the all managed servers/workstations groups is so you can quickly/easily highlight all of the machines in WARNING state that do not have this patch available to them. This would be a good time to use this as leverage to get your clients to upgrade some machines
    !SUCCESS: will only output if the patch has been verified to be installed
    Monitor breakdown... 
The monitor will be named "CVE-2019-1182 Remediation". It runs every 4 hours, but you can change this to whatever you want.
FAILED state: looks for the keyword "!ERROR:" in the powershell output
WARNING state: looks for the keyword "!WARNING:" in the powershell output
SUCCESS state: looks for the keyword "!SUCCESS:" in the powershell output
Enjoy! -Rookie
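    For reference, here is a hedged sketch of the keyword-output pattern the monitor check relies on. The KB numbers below are placeholders, not the real CVE-2019-1182 mapping - the linked CVE-2019-1182.ps1 carries the authoritative OS-to-KB table:
```powershell
# Sketch of the state-output pattern only; KB IDs here are placeholders.
$build = [System.Environment]::OSVersion.Version.Build
$kb = switch ($build) {
    17134   { 'KB0000001' }   # placeholder mapping, assumption
    17763   { 'KB0000002' }   # placeholder mapping, assumption
    default { $null }
}
if (-not $kb) {
    Write-Output '!WARNING: This OS is not eligible for the CVE-2019-1182 patch.'
} elseif (Get-HotFix -Id $kb -ErrorAction SilentlyContinue) {
    Write-Output "!SUCCESS: $kb is verified as installed."
} else {
    # The real check downloads and installs the KB here (no reboot), then re-verifies.
    Write-Output "!ERROR: $kb is missing and the install attempt needs attention."
}
```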
  4. 4 points
    As we all know, managing Windows updates can be a pain sometimes, and thanks to the report created by Gavsto (https://www.gavsto.com/free-report-get-a-second-opinion-on-your-patching/) we've discovered the patch compliance reporting within Automate can be very misleading. Using this report I started to dig into the anomalies and discovered two problems:
    1. The report showed the last Cumulative Update and when its installation was attempted - whether or not it was successful.
    2. I didn't want to rely on a manual task to review a report; I wanted Automate to raise a ticket when a machine fell behind by 35 days so we can go fix the issue.
    Based on these issues, I decided the best way to proceed was to expand upon Gavsto's idea. Since we're trying to verify the actual current patch level of every computer, I didn't want to rely on the information already existing in Automate in case it was incorrect; I figured the most accurate source of the current patch level would be the computer itself. I created two new extra data fields and populated them with a small powershell script. The logic for the powershell script followed the same principle as the report - list the latest cumulative updates that were installed - with the added condition of making sure the update was successful. The EDFs I created, and the powershell scripts that populate them, are:
    Last Successful Patch - the title of the patch that was last installed:
```powershell
$Result = @()
$Session = New-Object -ComObject Microsoft.Update.Session
$Searcher = $Session.CreateUpdateSearcher()
$HistoryCount = $Searcher.GetTotalHistoryCount()
$Result = @( $Searcher.QueryHistory(0, $HistoryCount) | Where-Object {
    ( $_.ResultCode -eq 2 -or $_.ResultCode -eq 3 ) -and
    ( $_.Title -like '*Security Monthly Quality*' -or
      $_.Title -like '*Servicing Stack Update*' -or
      $_.Title -like '*Cumulative Security Update*' -or
      $_.Title -like '*Cumulative Update For*' -or
      $_.Title -like '*Feature update to*' ) -and
    ( $_.Title -notlike '*Cumulative Security Update For Internet Explorer*' -and
      $_.Title -notlike '*Cumulative Security Update for ActiveX*' -and
      $_.Title -notlike '*Cumulative Update for .NET Framework*' )
} | Sort-Object Date )
If ($Result.Count -gt 0) { Return ($Result[-1]).Title }
Else { Return "No CU Patching Information Available" }
```
    Last Successful Patch Timestamp - the date and time this patch was installed, in YYYYMMDD-HHMMSS format. The script is identical except for the final lines:
```powershell
If ($Result.Count -gt 0) { Return ($Result[-1]).Date.ToString('yyyyMMdd-HHmmss') }
Else { Return "No CU Patching Information Available" }
```
    The added benefit of this approach is that it doesn't matter how the patch was installed - through Automate, manually, or by a third party - it should always appear, which is great for getting an accurate sense of patch level when onboarding a new client. 
Now that we had the latest patch information from the computer in Automate, I simply created an internal monitor to compare the timestamp and log a ticket for anything that hasn't successfully patched in over 35 days. So far this has been working for me; it's also highlighted a few machines that haven't been patched since last year that didn't show up in the report. A few ideas we've had so far to improve upon this:
- Add some verification to only log a ticket if the machine has been online during the past 35 days
- Run some autofix actions prior to logging a ticket, such as forcing the EDFs to update and revalidate, or attempting to run a patch job
I'd love to hear people's feedback on this approach, any gotchas you might see catching us out in future, or ideas to improve on the solution. If nothing else, hopefully this post helps some others detect systems that aren't patching properly.
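    For illustration, the staleness test the internal monitor performs boils down to the following (the real check is SQL against the EDF value; the timestamp format is the one set above):
```powershell
# Sketch of the 35-day comparison; the EDF value here is an example.
$edf  = '20190701-033012'   # example "Last Successful Patch Timestamp" value
$last = [datetime]::ParseExact($edf, 'yyyyMMdd-HHmmss', $null)
if ((Get-Date) - $last -gt [timespan]::FromDays(35)) {
    Write-Output 'Machine has not successfully patched in over 35 days - raise a ticket'
}
```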
  5. 4 points
    Please try to donate if you can. A lot of time went into this plugin - DOWNLOAD HERE
    Thank you to my donors: Mendy Green, Gavin Stone, Matt Navarette, ITLogix LLC, Derek Leichty, Kevin Bissinger, DataServ Corporation, NeoLore Networks Inc
    What this plugin does: This plugin will display your passwords that are on client and location levels in LabTech and present them for sending in ScreenConnect sessions. Gone are the days of copy/pasting passwords.
    Requirements: LabTech 11+, ScreenConnect 6.1+
    How to install:
    First, head over to the extensions section in ScreenConnect and install the RMM+ Passwords extension. ScreenConnect 6.1 or greater is required.
    Second, install the attached LabTech plugin. RMM+ Password Link
    Third, configure some timeout settings:
    A. Token Valid For How Many Minutes: an absolute token timeout in minutes. Set to -1 to make tokens last forever.
    B. Token Idle Expire Minutes: an idle timeout; if the plugin isn't used or a password isn't queried or passed for this many minutes, the token is invalidated.
    C. Only Allow One Token Per User: allows users to have passwords on only one computer (checked) or any number of computers (unchecked).
    Fourth, set permissions for the plugin.
    Fifth, activate by doing the following in ScreenConnect: fill in your LabTech server URL (please only use SSL, or you will be sending passwords in the clear over an unencrypted connection), your LabTech username and your LabTech password. Decide what items you want to show and log in. As of 1.0.22 the Automate URL is no longer in the passwords helper; please set it by going to Options -> Edit Settings in the Control extension.
    Sixth, enjoy saving time!!!
    Changelog:
    ScreenConnect/Control Plugin
    1.0.5 - Initial Release
    1.0.6 - Graphical updates.
    1.0.7 - Added ability on login form to save username. Fixed issue where command section would lose the access key used to send commands.
    1.0.8 - Fixed bug that wouldn't allow brackets in script names, password names or password values.
    1.0.9 - Fixed a bug introduced in the regex in 1.0.8 that would not allow it to detect empty results. Added ability to type "wake" into the command line and send the ScreenConnect version of wake to the computer.
    1.0.10 - Corrected issue with the wake command not being recognized if the case was not all lower.
    1.0.11 - Corrected issue to properly escape strings sent in commands.
    1.0.12 - Updated command to not wait the full timeout if data returns quicker. Also prepends !#timeout to the command for SC.
    1.0.13 - Updated script list to contain the actual folder structure as it appears in LT. Added a password filter box. Now allows showing/hiding sections on demand, and includes a client/location picker that lets you change locations or clients - useful if, for instance, you have generic passwords stored on your internal client location. Added ability to hide passwords per user to get rid of extra clutter that isn't needed.
    1.0.14 - Various CSS fixes (thanks Andrea). Added filter box to auto-highlight the top match in the password list; the Enter key will send the highlighted match.
    1.0.20 - Added ability to send a carriage return after the password.
    1.0.21 - Updated methods for Control version 19 compatibility.
    1.0.22 - Added optional dark theme to match Control version 19. The Automate URL is no longer in the passwords helper; please set it by going to Options -> Edit Settings in the Control extension.
    1.0.24 - Moved Automate URL back to the login form because Control now ties the helper to the signature file.
    1.0.25 - Fixed broken commands from 1.0.22. 
1.0.27 - Added code to allow MFA for login. Uses the same MFA code as the /automate login.
1.0.28 - Added checkbox to always show the MFA box so that login can be single-step instead of two-step.
1.0.29 - Changed the way you can execute commands to require a shared key between Automate and Control. This must be set in the plugin settings to use commands in the helper.
LabTech Plugin
1.0.0.1 - Initial Release
1.0.0.2 - Added permission requirement of Read Passwords for the client. If a user class doesn't have the Read Passwords permission, it will not let them go to the SC plugin.
1.0.0.3 - Added the ability to show username instead of display name as the password identifier.
1.0.0.5 - Corrected issue with incorrectly determining permissions (thanks Eric Besserer for the help). Added ability to block/allow script scheduling per location.
1.0.0.7 - Fixed issue where a login token set for more than 24 hours would invalidate each night. Various other bugfixes.
1.0.0.8 - Added regex include and exclude options for selectively showing passwords.
1.0.0.9 - Added auditing of passwords sent, passwords copied and scripts sent to the Dashboard Audit section.
1.0.0.11 - Added ability to regex include or exclude passwords from view globally. Security update to only show clients in the client selector that someone has access to.
1.0.0.13 - Fixes to regex include/exclude and superadmin permissions.
1.0.0.14 - Added ability to send a carriage return after the password.
1.0.0.16 / 1.0.0.17 - Corrected permission issue that was only looking at client level permissions and not computer permissions, thereby allowing group permissions to work correctly.
1.0.0.18 - Added security feature for IP filtering. The IP address that is captured on requests is whatever IP the HOST client of ConnectWise Control is running on.
1.0.0.20 - Added MFA login using the same code that you use for /automate. REQUIRES Control plugin 1.0.27 or higher.
1.0.0.21 - Added shared key that must match the Control plugin settings key in order to execute commands from the helper.
EDIT: as of LabTech plugin version 1.0.0.5, permissions need to be enabled on a per-client basis for users to have access to passwords and scripts.
  6. 4 points

    Version 1.1.0

    105 downloads

    Once a role has been detected for an agent, it will remain in the list of roles for that system even if the detection rule no longer applies. There are no timestamps recorded for role changes, so it is impossible to know if the non-detection state is short term or permanent. This Internal Monitor, named "Expire RoleDections Not Detected For 7 Days*", will identify inactive roles on an agent, creating a separate active alert for each role on the agent with a timestamp for when the role was first found missing. The RAWSQL monitor is three queries in one. The first one checks for any role that was reported missing more than 7 days ago, and deletes the role from the agent (based on the alert timestamp). The second query deletes role alerts from the history if the role is found to be active, or no longer exists on that agent. The last query is what actually detects missing roles to generate alerts. With the expired roles and alerts removed from the agent by the first queries, the active alert in the monitor will clear (heal) for that role also. The role must be continuously non-detected: if it is ever found to be a detected role before 7 days have passed, the alert will clear (query #2) and the monitor will start the clock again if the role becomes missing again. Manually assigned "Apply" and "Ignore" roles are preserved; only automatically detected roles are candidates for cleanup. If you want your roles to clear quicker, change the date adjustment in the first query from "-7 DAY" to whatever interval you believe is appropriate. This monitor has been updated/improved since it was first released. The attached SQL should safely update any existing version of this monitor, and it is recommended that you update even if you have this monitor in place and working, as this specific configuration may not have been published before.
  7. 3 points

    Version 1.0.4

    584 downloads

    The Internal Monitor "Notify When Agent is Online" watches machines with the "Notify When Online" computer EDF configured. It will send an alert as soon as it finds that the agent is offline. (The offline notice is skipped if the agent was already offline when notifications were first enabled.) When the agent comes online again another alert email will be sent and the EDF will be reset. This monitor can be used to notify when a lost computer comes online, or when that machine that is only online in the office every few weeks is back. To enable notifications for an agent, you simply put your email address into the "Notify When Online" EDF. You can enter multiple addresses separated by ";". The contents of the agent's "Comments" will be included in the email also. (Helpful to remember why you wanted to be alerted, or what instructions should be followed after receiving the alert.) When the agent returns online, the Network Inventory and System Info are refreshed. The recovery email will include the following details: The last check in was at @AgentCheckIn@. Public IP Detected: %RouterAddress% Internal IP: %LocalAddress% System Uptime: %uptime% Last Logged in User: %lastuser% This bundle includes a Script+EDF XML, and a SQL file with the Internal Monitor. To import the Script and EDF, select Tools -> Import -> XML Expansion. After import the script should appear in the "\Autofix Actions" folder. To import the Internal Monitor, select Tools -> Import -> SQL File. The monitor should be imported AFTER the script bundle has already been added. After importing, verify that a valid Alert Template is selected for the monitor. The Alert Template MUST have the "Run Script" action enabled without any script specified in the template. (The script is set on the monitor) Read the Script Notes for advanced control over the number of times a notification will be triggered.
  8. 3 points
    Please note that the below is not an exhaustive list, but is an indication of how members and vendors are expected to behave within our community. There may well be periodic changes made to ensure we maintain the integrity of our community.
    Code of Conduct - Members:
    1. Support the MSPGeek mission of open communication, collaboration, learning and sharing with other members and vendors.
    2. Be respectful and courteous to all members, staff and vendors.
    3. Share your knowledge and experience with others.
    4. You are fully responsible for your own content. You need to make sure you have relevant permission to post any information or material on our public-facing platforms.
    5. If you utilize someone else's work and improve on it, you should share this back to the community as well as recognizing the author of the original work.
    6. Do not post private information about individuals without their explicit consent.
    7. We strongly suggest that users who wish to post files, scripts, solutions etc. do so in the MSPGeek forums and then link to it from within the Slack.
    8. Do not post anything that a reasonable person would consider offensive, abusive, or hate speech. Admins' decisions on this are final.
    9. Members should not send other members direct messages for commercial reasons (i.e. trying to sell or advertise something) unless specifically invited to.
    10. Raise concerns of inappropriate behaviour to the Admins.
    Code of Conduct - Vendors:
    1. MSPGeek is primarily a community for open communication, collaboration, learning and sharing, and we expect vendors to have the same values.
    2. Where vendors have had a channel created for them, they agree to staff it and answer queries in a reasonable amount of time. Where vendors periodically abandon their channel, the channel will be removed.
    3. Vendors should not advertise their services or products in any of the general channels.
    4. Vendors should not solicit members via direct message unless specifically invited to.
    5. Vendors should prefix/suffix their display names in Slack with their company/product name so as to identify themselves as a vendor.
    6. Vendors are encouraged to engage with members in the general channels. If members are actively discussing your product/service, then you can enter into conversation about it in that general channel.
    7. Vendors are free to advertise their own services in their own channel.
    8. Vendors are free to set their own description for their own channel.
    9. Vendors who want to advertise something to the wider community because they feel the community would benefit from it should approach the Admin team, who will take a decision on it.
    10. Vendors should not utilize member information outside of the community for the purpose of advertising or communication unless commercially engaged by a member.
  9. 3 points
    My script isn't safe for public consumption, but most of the heavy lifting is done by Powershell. Here are the two important parts of it.
    Part One, creating or updating the user:
```powershell
$Password = ConvertTo-SecureString -String '@Password@' -AsPlainText -Force
$Username = '@Username@'
Try { $User = Get-LocalUser -Name $Username -ErrorAction Stop; Write-Host "User exists" }
Catch { Write-Host "User does not exist" }
if (!$User) {
    Try {
        $UserCreation = New-LocalUser -Name $Username -Description "MSP Local Administrator Account" -Password $Password -ErrorAction Stop
        $User = Get-LocalUser -Name $Username -ErrorAction Stop
    }
    Catch { Write-Host "User Creation Failed: $($_.Exception.Message)" }
}
if ($User) {
    Try { $GroupAddResults = Add-LocalGroupMember -Group Administrators -Member $User -ErrorAction Stop; Write-Host "User added to group" }
    Catch { Write-Host "User Already in Admin Group" }
    Set-LocalUser $User -PasswordNeverExpires $true
    Add-Type -AssemblyName System.DirectoryServices.AccountManagement
    $DS = New-Object System.DirectoryServices.AccountManagement.PrincipalContext([System.DirectoryServices.AccountManagement.ContextType]::Machine)
    if ($DS.ValidateCredentials($Username, '@Password@')) {
        Write-Host "Credentials do not need to be updated"
    } else {
        Try { $UserUpdate = Set-LocalUser $User -Password $Password -ErrorAction Stop; Write-Host "Credentials Updated" }
        Catch { Write-Host "Updating of credentials failed" }
    }
}
```
    Part Two, verifying the credentials are correct:
```powershell
Add-Type -AssemblyName System.DirectoryServices.AccountManagement
$DS = New-Object System.DirectoryServices.AccountManagement.PrincipalContext([System.DirectoryServices.AccountManagement.ContextType]::Machine)
if ($DS.ValidateCredentials('@Username@', '@Password@')) {
    Write-Host "Credentials Correct"
} else {
    Write-Host "Credentials Incorrect"
}
```
    I use machine-level EDFs to store the password after I've verified it to be correct. The first script also verifies the credentials, but for safety I verify a second time in a new Powershell session so I can be 100% sure the password was updated. At the beginning of the script I check to make sure the AD Domain Controller role isn't present, so that we don't accidentally end up creating a domain user and then changing the password for every domain controller on a network.
  10. 3 points

    Version 1.0.0

    659 downloads

    This role definition detects when Bitlocker is enabled on a machine. To import these Role Definitions, in the ConnectWise Automate main screen, go to Tools > Import then choose SQL File. Browse to the relevant file, and OK the message about inserting one row.
  11. 3 points
    Not sure why I was slacking and didn't post this earlier, but here it is finally. This custom message box is designed as a replacement for the built-in CWA message box used for reboot prompts. However, it allows you to customize the colors, message, buttons, actions from the buttons, and responses from those buttons so that you have a better handle on the user response (or lack thereof). This is a standalone EXE that you would transfer via script from your CWA server using the File Download function. https://squattingdog.net/custom-message-box/
  12. 3 points
    The AV detection process works basically exactly like this (I hate partial information, so I reviewed the code actually used to choose the AV ID from LTService version 190.225 (19 Patch 8)):
    1. Loop through AV Detection types in ascending order by ID. When done, go to Step 10.
    2. Evaluate the OS Type setting. If the target OS specified doesn't match the current machine OS, go to Step 1.
    3. Evaluate the Program Location path. Is it a valid file? If not, go to Step 1.
    4. Evaluate the Definition Location. If it is blank, go to Step 1. Is it a valid file? Extract the timestamp as the Definition Date and go to Step 5. Is it a valid folder? Extract the timestamp as the Definition Date and go to Step 5. Otherwise, use the "Date Mask" regex pattern to extract the Definition Date from the Definition Location value. If nothing was extracted, go to Step 1.
    5. Evaluate the Version Check value. If it is blank, go to Step 6. Is Version Mask blank? Go to Step 1. Is Version Check a file? Capture the file version as the Version Check value. Does the Version Mask pattern match the Version Check value? If not, go to Step 1.
    6. We now have a complete "AV" profile to test. Is this the first AV ID candidate? Go to Step 7. Is the Definition Date equal to or newer than the last found AV ID's? If not, go to Step 1.
    7. The current AV ID becomes the currently "Chosen" AV ID.
    8. Evaluate the AP Process (split on ":" if found, and loop). Does it match a process that is running? If not running, AV Running is set to False; go to Step 1.
    9. The current AV ID is added to a list of running AV IDs. AV Running is set to True. Go to Step 1.
    10. Check the list of Running AVs (built in Step 9). If 1 or fewer were found, go to Step 14.
    11. Loop through the Running AV IDs. When done, go to Step 14.
    12. Does the Definition Location contain "Windows Defender"? Go to Step 11.
    13. The current AV ID becomes the currently "Chosen" AV ID. Go to Step 11.
    14. Is the "Chosen" AV's AP Process value blank, or does it end with "*"? Go to Step 17.
    15. Is the WMI class \root\SecurityCenter2:AntiVirusProduct found? Set AV Running to the state indicated by the "ProductState" attribute and go to Step 17.
    16. Is the WMI class \root\SecurityCenter:AntiVirusProduct found? Set AV Running to the state indicated by the "onAccessScanningEnabled" attribute and go to Step 17.
    17. Report the Chosen AV ID and the AV Running state.
    When two AV definitions are compared, the first one tested (lowest ID) has the advantage, but it will still lose to another AV match with a newer signature date. And if multiple running AV products are found, one of them will be picked over a "newer" product that wasn't running. In general, the first chosen running AV with the newest definitions will be the one returned, and the AV Running state will be based on matching the process name or on what Windows Security Center is indicating. (I think I summarized that right.) If your primary AV product's definitions get a day behind, a secondary AV could suddenly become the reported AV. In my experience, outdated AV definitions are the most common reason for Windows Defender to show up even when you know you have another AV product in place. If its definition date is newer, it will be reported as the active AV product.
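    If you want to see what Windows Security Center itself is reporting (steps 15-16 above), a quick query like the sketch below works; note the productState bit decoding is a common community interpretation rather than documented behavior, so treat it as an assumption:
```powershell
# Inspect Security Center's view of installed AV products (workstation OSes only).
Get-CimInstance -Namespace root/SecurityCenter2 -ClassName AntiVirusProduct |
    ForEach-Object {
        $hex = '{0:X6}' -f $_.productState
        [pscustomobject]@{
            Product  = $_.displayName
            Enabled  = $hex.Substring(2, 2) -in '10', '11'   # real-time protection on (assumed decoding)
            UpToDate = $hex.Substring(4, 2) -eq '00'         # definitions current (assumed decoding)
        }
    }
```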
  13. 3 points
    And if you study the code you will see the simple scheme that ConnectWise uses to encrypt all passwords in the database and be able to decrypt them.
  14. 3 points
    Is anyone else having the problem where the Deployment Manager says "Device failed deployment readiness checks"? I've got the password set at Location > Probe > Deployment Manager > LTService. The password doesn't have any special characters either. Not sure if anyone has run into this issue before? It seems to be happening across all our gen2 probes.
  15. 2 points

    Version 1.0.0

    13 downloads

    This script pulls the most recent version of Zoom directly from Zoom and installs it, set up with auto-update configured. Created for a customer that uses Zoom everywhere; shared to help those trying to keep ahead of the Zoom security issues.
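    The approach boils down to something like the sketch below; the download URL and the ZoomAutoUpdate MSI property are assumptions, so verify them against Zoom's current documentation before use:
```powershell
# Grab the current Zoom MSI straight from Zoom and install silently with auto-update on.
$msi = Join-Path $env:TEMP 'ZoomInstallerFull.msi'
Invoke-WebRequest -Uri 'https://zoom.us/client/latest/ZoomInstallerFull.msi' -OutFile $msi
# ZoomAutoUpdate="true" is the assumed property for enabling self-updates
Start-Process msiexec.exe -ArgumentList "/i `"$msi`" /qn /norestart ZoomAutoUpdate=`"true`"" -Wait
```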
  16. 2 points
    First of all, thanks @Gavsto and @DarrenWhite99 for helping me get the join file moved from the PDC to the joining computer. With that out of the way, attached is my script to join a computer that is not in the network of the domain. If you run it on any computer in a client, it will find the primary DC of that client, create an offline domain join file, move it over to the workstation, and finally join that workstation to the domain. It only works on computers that are currently in the workgroup "WORKGROUP", as a safety feature to make sure it doesn't run on anything it shouldn't. The computer does not need to be in the same network as the DC, but it can be, so this can be used for any instance where you want to join a computer to a domain. It does not allow you to log in with any domain credentials unless the computer can reach the DC; offline caching of credentials, as far as I can tell, is not something that can be done. One improvement that I am probably not going to do any time soon is running an MD5 hash on the source and destination files to ensure the file copy went exactly right, but it is probably a good idea. TNE - Offline Domain Join.xml
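    For context, the core of an offline domain join is Windows' built-in djoin.exe, which is what a script like this automates; the domain, machine and file names below are illustrative:
```powershell
# On the primary DC: provision the computer account and emit a join blob
djoin.exe /provision /domain contoso.local /machine WS-0042 /savefile C:\Temp\WS-0042.djoin

# On the off-network workstation, after the blob has been copied over:
djoin.exe /requestODJ /loadfile C:\Temp\WS-0042.djoin /windowspath C:\Windows /localos
# A reboot completes the join; interactive domain logons still need DC reachability.
```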
  17. 2 points
    1. Ensure PowerShell version 4.0 or higher is installed on the LabTech server itself (needed for Invoke-RestMethod to not prompt for credentials).
    2. Create the folder C:\Program Files\WindowsPowershell\Modules\LabTech on your LabTech server and place LabTech.psm1 in it.
    3. Import the Update-MissingWarranties.xml script into Automate, ignoring warnings about version mismatch. In version 11: Tools->Import XML Expansion. In version 12: System->General->Import->XML Expansion.
    4. Unrestrict the script execution policy on your LabTech server: Set-ExecutionPolicy Unrestricted -Confirm:$false -Force
    5. Open Immense Networks Scripts->Update-MissingWarranties, copy the value of the PowershellCode variable into Powershell ISE on the LabTech server and run it. There should be no errors about script execution or modules not found. You may get a rate limit error from the API, but that's relatively normal.
    6. Open your _System Automation->Onboarding->Initial System Configuration - Partner Script and insert a Script Run step that runs the Immense Networks Scripts->Update-MissingWarranties script.
    7. Open the Manage plugin. In version 11: click the Connectwise button at the top of the main LabTech interface. In version 12: System->Manage Integration.
    8. Set the Manage plugin to map the following fields (see screenshot below): PurchaseDate to Computer.Asset.TagDate, and WarrantyExpiration to Computer.Asset.WarrantyEndDate.
    Update 2018-11-09: Refactored code, implemented bulk lookups with the Dell API. Attempted to fix HP lookups. Works in small batches.
    Update 2018-11-26: Fixed issue with the function responsible for breaking computers into chunks of 100; it was returning a flat array with all objects. Modified trim functionality to only trim serial numbers that require trimming.
    Update 2019-03-28: Filtered out Dell Digital Delivery entries from warranties. *Note: you will have to remove all the current Dell warranty info in your database and re-run this script to get accurate data: update labtech.computers set warrantyend=null where biosmfg like '%dell%';
    Update 2019-03-28 (2): HP: added Part Number and Serial Number to HP API requests, as we found the HP warranty API usually needs both to find the warranty. Dell: updated AssetDate to be the ShipDate value returned from the API (it was previously returning the start date of the warranty with the latest end date). This will require you to clear out both your assetDate and warrantyEnd fields for Dell computers; use the following query: update labtech.computers set warrantyend=null, assetdate=null where biosmfg like '%dell%'
    LabTech.psm1 Update-MissingWarranties.xml
  18. 2 points
    That sounds like a terrible situation. Others may have a better idea, but here's how I would handle it (I'll attach screenshots below):
    Patch Manager: Follow the screenshot, but here's why I selected what I did. The script runs in the early morning, so I unchecked Monday because that's Sunday night and still the weekend; however, I kept Friday checked because that's Thursday night, which is still during the week. I went ahead and unchecked the First and Last weeks to make certain that the first and last few days of the month are left alone. The most important piece is the script before patching: it's going to check whether it's a holiday (based on the dates you give it), and you must check the box to cancel the patch job if the script fails, because it's going to intentionally fail if it's a holiday.
    Holiday Script: Set an alert or create an auto-generated ticket at the beginning (or really the end) of each year to make sure you've got the dates right. You can also add or remove holidays; just make sure you're setting the holiday based on the date format Month/Day/Year (don't add zeros to single digits in the date). The script will fail if it's a holiday, which will cancel the patch job for that night. (See the sketch below.)
    Script XML for uploading to your server (you can find it under Scripts > Dev Scripts): Holiday Check.xml
    Adjust accordingly and feel free to hit me up in Slack if you have any questions. Good luck and happy patching!
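    The equivalent logic of the holiday check, expressed as a PowerShell sketch (the attached Holiday Check.xml is an Automate script, and the dates here are examples):
```powershell
# Fail on purpose on a holiday so the "cancel patch job on failure" setting kicks in.
$holidays = '1/1/2019', '7/4/2019', '12/25/2019'   # Month/Day/Year, no leading zeros
$today    = (Get-Date).ToString('M/d/yyyy')
if ($holidays -contains $today) {
    Write-Output 'Holiday detected - failing intentionally so the patch job is cancelled'
    exit 1
}
exit 0
```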
  19. 2 points
    Software Deployment Script Template
    I created this script to simplify new software deployments. Please remember that this is meant to be a template, so feel free to customize it to fit your deployment needs.
    • This script can be used for multiple software deployments from a parent script (such as Onboarding or a New Computer Build).
    What it does:
    - Checks to see if the software is already installed on the computer.
    - Creates a ticket for accountability.
    - Updates the status of the installation as the script runs.
    - Comments the ticket if the download fails.
    - Attaches the installation string and the computer's response when executed.
    - Verifies whether the software is successfully installed or not.
    - Automatically closes the ticket once the software's installation is complete and verified.
    - Gives you the option to add billable time to the ticket.
    - Downloads the software installer from Web/FTP to the installation folder you specify on the local computer (default: C:\Support\Vender).
    - Ready for both 32-bit & 64-bit packages respectively.
    - Executes an MSI installation silently by default (see the sketch below).
    - You can very easily edit & execute multi-line batch or Powershell scripts from the existing execute function (which then gets displayed in the ticket).
    - Customize it with whatever parameters/variables your installation script needs.
    - Updates the Software & Services inventory for that computer, then confirms that the software is successfully installed.
    This template is pre-loaded with all of the variables to download & install 7-Zip. Replace the content of the variables with your software information. Reminder: @SoftwarePackageName@ needs to match the exact name of the software as it exists in Add/Remove Programs. It uses this name to compare whether the software is already installed, and to confirm that it was successfully installed at the end. Template - Software Install MSI.xml Template - Software Install MSI.zip
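    For illustration, the silent MSI execute step amounts to something like this, using the pre-loaded 7-Zip example; the installer file name and logging switch are assumptions:
```powershell
# Silent MSI install with a verbose log next to the installer.
$msi = 'C:\Support\Vender\7z1900-x64.msi'   # example file name under the template's default folder
Start-Process msiexec.exe -ArgumentList "/i `"$msi`" /qn /norestart /l*v `"$msi.log`"" -Wait
# The template then refreshes the Software & Services inventory and compares
# Add/Remove Programs names against @SoftwarePackageName@ to confirm success.
```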
  20. 2 points
    Thanks @Slartibartfast for the start of this script. Slarti's version includes a Windows 7 to 10 upgrade, but I ripped that out because it wasn't needed for my script. I then went on to add a whole lot of checks and balances to make sure it goes smoothly, and to alert if it didn't. The attached script will upgrade a copy of Windows 10 to a newer feature update. All it needs is a copy of the ISO (one for 32-bit and one for 64-bit), placed in the LTShare/Transfer folder. You can get the ISOs from Microsoft's Media Creation Tool. It emails the initial results to the tech who ran the script, checking for the most common failures I could imagine. The script will schedule itself to run again 60 minutes after the initial install finishes (which can take an hour or so after the download finishes). On the second run it will check to see if the upgrade was successful and email the results to the tech who ran the script. The entire process will take upwards of 3 hours, unfortunately. 90% of that time the computer is usable, but I would recommend doing it at night during maintenance hours, as the user will not be warned before the computer is rebooted. If you are planning on mass updating, you could move the ISOs out to a cloud host, such as Azure storage, and download them from there instead, so as to not murder the bandwidth on your CWA server. Download here:
    [Edit 2019-10-16] Updated to fix bug with follow-up version check.
    [Edit 2019-10-17] Script now leaves notes in the script log for actions, instead of just emailing. Also, resend everything doesn't stop on failure.
    [Edit 2019-11-20] Added @johnduprey's powershell to run the install and better check for failures.
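    For reference, the unattended in-place upgrade the script drives boils down to roughly this sketch; the ISO path is illustrative (the real script pulls the ISO from the LTShare/Transfer folder):
```powershell
# Mount the feature-update ISO and launch Windows Setup unattended.
$iso   = 'C:\Support\Win10_x64.iso'   # illustrative path
$mount = Mount-DiskImage -ImagePath $iso -PassThru
$drive = ($mount | Get-Volume).DriveLetter
# /auto upgrade keeps apps and data; /quiet suppresses UI; the machine reboots on
# its own, which is why running during maintenance hours is advised.
Start-Process "${drive}:\setup.exe" -ArgumentList '/auto upgrade /quiet /dynamicupdate disable' -Wait
```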
  21. 2 points
    Following the MSPs who were impacted by this https://www.reddit.com/r/msp/comments/ani14t/local_msp_got_hacked_and_all_clients_cryptolocked/ a number of MSPGeekers had an impromptu call to discuss security in general and what best practices we all follow to ensure our systems are as secure as possible. This prompted an idea from @MetaMSP that we have a place where best practices can be defined - things that we can put in place to make our RMMs as secure as possible. I will update this with a list of generally agreed-upon methods based on the discussion.
    How can I apply better security?
    1) Enable Multi-Factor Authentication. This functionality already exists within Automate in the form of plugins, and for the effort to implement it gives a massive boost to security. As an MSP, every single account you have should have 2FA on it.
    2) Do not publish your Automate URL publicly - anywhere. If you are linking to your Automate site, or even your Control site, from anywhere on your website - remove it and ensure to the best of your ability it is removed from search engine indexes. Attackers can find servers like this on Google using very simple methods, and you will be one of the first they attempt to attack.
    3) Review all plugins/extensions installed, and disable or remove the ones you no longer use. Beyond the added benefit of speeding your system up, each of these adds a small risk profile, as you are relying on third-party code being secure running in the background. Removing plugins you no longer use or need reduces the surface area of attack.
    4) Review the ports you have open and close ports that are not needed. You will find the ConnectWise documentation here on what ports should be open: https://docs.connectwise.com/ConnectWise_Automate/ConnectWise_Automate_Documentation/020/010/020 . Don't just assume this is right - check. Ports like 3306 (MySQL DB port) and 12413 (File Redirector Service) should absolutely not be opened up externally.
    5) Keep your Automate up to date. ConnectWise are constantly fixing security issues that are reported to them. You may think you are safe on that "old" version of Automate/LabTech, but in reality you are sitting on an out-of-date piece of software that is ripe for attack.
    6) DON'T share credentials except in cases of absolute necessity (one login is available ONLY and you can't afford a single point of failure if that one person that knows it disappears). <-- Courtesy of @MetaMSP
    7) DO ensure that robots.txt is properly set on your Automate server. If you can Google your Automate server's hostname and get a result, this is BROKEN and should be fixed ASAP. <-- Courtesy of @MetaMSP
    8) Firewall blocking. I personally block every country other than the UK and the USA from my Automate server on our external firewall. This greatly reduces your chance of being attacked out of places like China, Korea, Russia etc.
    9) Frequently review the following at your MSP:
    - Check that the usernames and passwords set are secure; better yet, randomise them all and use a password manager.
    - Treat vendors/services that don't support or allow 2FA with extreme prejudice. I will happily drop vendors/services that don't support this. If you 100% still need to keep them, set up a periodic review and pressure them to secure their systems, because you can almost guarantee that if they are not doing customer logins properly there will be other issues.
    - Set up a periodic review to audit users that are active on all systems: PSA, Office365, RMM, Documentation Systems (ITGlue/ITBoost).
    - Audit 3rd Party Access, Consultants and Vendor access to your systems <-- Thanks @SteveIT
  22. 2 points
    Hello, @KyotoUK, hopefully what we've done may be of some use. Here's the rough breakdown of the script we use (we're using an EDF to track whether we have added/updated the admin account):
    Shell 'net user [username]'. IF %shellresult% contains [username], jump to the part of the script where we just use a batch file to update the password. IF %shellresult% doesn't, we need to create a new account.
    New account - Use the 'add user' script step, then a shell command for 'net localgroup Administrators [username] /add', and to make the password not expire we run 'WMIC USERACCOUNT WHERE (Name='[username]' and Domain='%computername%') SET PasswordExpires=FALSE'. Set the tracking EDF to 1, then exit the script.
    Update password instead - Download a batch file which basically just runs 'net user' to set the password. We use that WMIC command again just to reinforce the 'no expire' part. Then delete the batch file and set the tracking EDF to 1.
    Why the batch file? Because a 'net user' command and its results show up in the Commands list on a given machine, so folks who have access to the Commands logs but shouldn't know the special password would get to see it. (Oh yes, use ECHO OFF in that batch file.)
    The EDF is just a toggle to say "yes, the special admin account has been taken care of." Then you can use its 0 or 1 status to do a search that populates a group. That group can be set to run the add-or-update-admin-user script every couple of hours. Once the script runs, the EDF/search will take care of removing the endpoint from the group.
  23. 2 points
    A page I made to help new technicians learn the various systems in CWA, or to reference when reading other documentation. https://dbeta.com/automateglossary/ I'm open to suggestions about improvements and additions. Specifically, I'm looking for recommended terms and related links. The more I fill this out, the better a resource it is for new and old users alike; if I get good related links in there, it also serves as a reference point for outside resources. Personally, I end up searching for the same pages over and over again. I could just create bookmarks, but for some reason I don't.
  24. 2 points
    We've all come to the forums at some point looking for a way to get more specific results from a monitor than is available with the default tools and options in CWA, and have stumbled across posts detailing the complex and arcane process of making RAWSQL monitors. These are basically where you take all the built-in logic that monitors do behind the scenes and recreate it manually with a query. The advantage is that you can get much more specific with your queries, to a degree simply not possible in a regular monitor.
    There are downsides to RAWSQL monitors, though. They require a lot more work up front. Regular monitors do a lot in the background, like returning a bunch of info that ties the results to the computers detected, and CWA assumes this data is returned, so a monitor that doesn't do this will not alert properly. This can be done manually, mostly by joining the agentcomputerdata table in the query and returning values from that. Even if you do this properly, some features like ignoring agents or group targeting (according to Darren) have to be done manually in the query rather than from the simple GUI. There are also issues with supportability: if you ask a support person to help you with anything even somewhat relating to a RAWSQL monitor, they will laugh you right off the phone.
    So, what if you want to do something a little more complex than a regular monitor can handle, but don't want to deal with the atrocity that is RAWSQL? I would use what I like to call a LazySQL monitor. A LazySQL monitor is one in which you let the built-in monitor functions handle most of the busywork, and you use the Additional Conditions field to limit the results to computers returned from an SQL query (or queries) that does all the complex selection and limiting. Basically, you make a monitor that by default catches every computer (I make it look for "computerid notequals 0"); I will attach an example of a monitor where I do just that so you can visualize it more easily. As you can see, the basic monitor configuration is very simple and would match all computers; then I do the more specific stuff, like returning computers that have patches missing and an EDF checked, as subqueries in the Additional Conditions field. This field basically just tacks whatever is in it onto the end of the SQL query put together by the monitor.
    For a simpler use case, let's take one I just finished explaining to a user in the Slack channel. He wanted to put together a RAWSQL monitor to return all computers that didn't have a specific software installed; for our purposes let's say Firefox. Instead of making a complex RAWSQL monitor to do this somewhat simple thing, he could use a LazySQL monitor. Using the same basic settings as the above picture, simply adding "AND computerid NOT IN(SELECT DISTINCT c.computerid FROM computers c JOIN software s ON c.computerid = s.computerid AND s.name LIKE "%firefox%")" as the additional condition made the monitor limit its results from all computers to only those whose IDs were NOT returned by the subquery, which returns all those that do have Firefox installed. This monitor will have greater supportability, greater functionality (because it fits into CWA better), and took about 5 minutes to make and deploy. Please note that I am aware the last example, returning computers without Firefox, could be accomplished easily with a regular monitor by using the invert check function. 
LazySQL monitors shine when you need to match a bunch of disparate criteria because it's easy to gather the computerids that match in a subquery and just check for "computerid not in (subquery)". Try not to nest a bunch of subqueries inside each other, if you can, because that can be slow. If you have any questions, you can always try asking me in the slack channel -Slartibartfast of Magrathea
  25. 2 points
    Hello everyone, new Automate user here, and I just joined the forums, so this is my first post. I have attempted to do my homework before posting by searching other Virus Scan related posts for key details, and while I have found useful information I have not been able to resolve my issue. I run Cisco AMP for Endpoints, currently on Connector version 7.0.5. Virus Scan does not detect that the AV product exists. I have followed the guide here, with no luck so far. I have applied the exclusions listed here. I found DarrenWhite99's post here, and I believe part of the issue is that in step #3 the program file is not being detected.
    **Edited** I have conducted further testing from the agent's Command Prompt and discovered that I cannot perform the DIR command and get any data back other than File Not Found. I tried pointing the command at a 7-Zip executable and ended up with the same result, so I am wondering if this isn't permissions related somehow. From the Computer Management screen I click the Wrench icon and open a Command Prompt. I then execute an ECHO and DIR command with the string from the Program Location definition entry. Example:
    ECHO {%HKLM\SYSTEM\ControlSet001\Services\CiscoAMP_7.0.5:ImagePath-%}\sfc.exe
    DIR {%HKLM\SYSTEM\ControlSet001\Services\CiscoAMP_7.0.5:ImagePath-%}\sfc.exe
    or
    ECHO %ProgramFiles%\Cisco\AMP\7.0.5\sfc.exe
    DIR %ProgramFiles%\Cisco\AMP\7.0.5\sfc.exe
    The ECHO returns the directory and file fine, but the DIR command comes up stating File Not Found. So I believe this is proof that the agent cannot evaluate the Program Location path, and that Cisco AMP is possibly protecting itself from detection. To remediate issues with step #3 I have white-listed the Automate processes LTSVC.exe, LTSvcMon.exe and LTTray.exe within Cisco AMP. Despite having done this, I performed the above commands again and still my DIR command returns File Not Found.
    I have two DEFs that I am testing. The first one uses the registry path to determine the Program Location, as follows:
    Prog Loc: {%HKLM\SYSTEM\ControlSet001\Services\CiscoAMP_7.0.5:ImagePath-%}\sfc.exe
    Def Loc: %ProgramFiles%\Cisco\AMP\tetra\versions.dat
    AP Process: sfc*
    Date Mask: (.*)
    OS Type: All OS's
    Version Check: {%HKLM\SYSTEM\ControlSet001\Services\CiscoAMP_7.0.5:ImagePath-%}\sfc.exe
    The second DEF for testing uses the Program Files path instead of the registry, as follows:
    Prog Loc: {%HKLM\SYSTEM\ControlSet001\Services\CiscoAMP_7.0.5:ImagePath-%}\sfc.exe
    Def Loc: %ProgramFiles%\Cisco\AMP\tetra\versions.dat
    AP Process: sfc*
    Date Mask: (.*)
    OS Type: All OS's
    Version Check:
    I have also substituted the actual file path for the Program Location in both tests: %ProgramFiles%\Cisco\AMP\7.0.5\sfc.exe
    Despite these settings the agent is not being detected; in my Computer Management screen the Antivirus tile shows "Not Installed". In addition, I had the issue with Windows Defender being populated every time on Windows 10 and performed the export/import trick to reduce its priority in the AV list. Any advice on getting this AV to populate properly?
  26. 2 points
    I have updated GitHub with Push-Automate in the Automate Functions.
  27. 2 points
    We corrected this a while back by adding this to our drive space monitor. This is the additional condition for our 10 GB monitor:
    Drives.Size > 16384 and Drives.Model not like '%USB%' and Drives.FileSystem not in ('CDFS','UNKFS','DVDFS','FAT','FAT32','NetFS') and Drives.free < 10240 and missing=0 and drives.internal != 0 and driveid not in ()
    That should help, as it's not going to even look at anything USB.
  28. 2 points
    The major announcement we have for you today is about something that's been requested by multiple members of the community. WE ARE NOW SELLING MERCH! That's right, we now have a store available for you to purchase items with our logo/designs! We have multiple designs on multiple products, so check them out at the URL below. https://shop.spreadshirt.com/msp-geek/ Side Note: If you know someone who is good at graphic design and would like to donate some images for further designs, please feel free to reach out to us at admin@mspgeek.com! (Stolen directly from Kyle's Slack announcement)
  29. 2 points
    Hey guys, This is a quick ConnectWise Automate script I put together to update Dell SupportAssist, in response to: https://www.dell.com/support/article/ca/en/cadhs1/sln316857/dsa-2019-051-dell-supportassist-client-multiple-vulnerabilities?lang=en Several users on Slack have tried it with success. Enjoy! Duong_Dell - May 2019 - Dell Support Assist Vulnerability.xml
  30. 2 points
    This is one I made a couple of months ago. It checks whether Teams is installed and, if not, downloads it directly from Microsoft's site and installs it. This way it is always the latest version being installed. Install Microsoft Teams.xml
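    A sketch of the check-then-install flow; the download URL and the silent switch are assumptions, so confirm the current link from Microsoft before relying on it:
```powershell
# Check Add/Remove Programs for Teams; install the latest build only if absent.
$paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
         'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
$teams = Get-ItemProperty -Path $paths -ErrorAction SilentlyContinue |
         Where-Object { $_.DisplayName -like '*Teams*' }
if (-not $teams) {
    $exe = Join-Path $env:TEMP 'Teams_windows_x64.exe'
    Invoke-WebRequest -OutFile $exe -Uri ('https://teams.microsoft.com/downloads/desktopurl' +
        '?env=production&plat=windows&arch=x64&download=true')   # assumed URL
    Start-Process $exe -ArgumentList '-s' -Wait   # '-s' silent switch: assumption
}
```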
  31. 2 points

    Version 3.2.2

    272 downloads

    This solution will export customizations into a folder hierarchy based on each type of backup. It uses only Automate scripting functions, so it is compatible with both Cloud Hosted and On-Prem servers. It is compatible with MySQL 5.6+ and Automate version 11+. Script backups will be placed in folders matching the script folders in your environment. Each time a script is exported, the last-updated time and user information is included, providing multiple script revisions as it changes over time. This script does not decode the scriptdata, so script dependencies like EDFs or other scripts will not be bundled in the XML export; but if you are just looking to undo a change, the script dependencies should exist already. Scriptlets will not be "versioned", but it will detect when they have changed and will only back up new or changed Scriptlets. Additionally, the following item types will also be backed up: Internal Monitors, Group Monitors, Remote Monitors, Dataviews, Role Detections, ExtraData Fields, and VirusScanners. The backups will be created at the folder identified by "@BackupRoot@", which you can provide as a script parameter when scheduling if you do not want to use the default path. Target the script against an online agent, and the script data will be backed up to that computer. Future runs will reference the saved "Script State" variable for that agent and will only include the scripts updated since the last successful backup. Backup verification is performed: if a script backup file was not created as expected, the backup timestamp will not be changed, allowing the backup to be attempted again. The attached .zip bundle contains scripts actually backed up by this solution. Import the "Send Email" script first, and then import the "Backup" script. If there are any problems, or you would rather import a script exported by Automate, the "Backup Automate Scripts (and More).xml" is included as well. You do not need to import all three files! Just schedule this to run daily against any agent to establish your script archive.
  32. 2 points
Run this as a shell command in your script (do not use Shell as Admin, as any "as Admin" command runs without elevated permissions if UAC is enabled):

@powershell -NoProfile -ExecutionPolicy bypass -Command "(iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))) >$null 2>&1" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin
  33. 2 points
@DarrenWhite99 and I were poking around in the exported script XMLs and I ended up making this library to handle decoding the XML. That library will turn an XML into JSON and back. It will also add each step's function definition and function text output, e.g. function 1, param1=1 turns into LTCommand: Update Agent. As part of this process, Darren suggested making a function document that contains every script function, its parameters, and associated function text. The live version of this reference document is available here: https://github.com/mspgeek/labtech-script-decode/blob/master/DOC.md This repository will be updated as needed as changes are made to the live version of Automate.
  34. 2 points
The monitor will generate a new alert for every event it finds. If you are getting the alert over and over, then the event is being logged over and over. That said, typically an event alert that creates a ticket should update an open ticket with any additional events it finds. It sounds like the alert is not configured to create a ticket; it is sending an email. You need to open computer 1571, check out the remote monitors, and determine what is controlling this monitor and what alert template it is configured to use. (It is group or plugin controlled, most likely.) Then determine if the alert template needs to be reconfigured, or the monitor needs a different alert template assigned, etc. Then update the plugin/group configuration for the monitor. It is also possible that this is being generated by an Internal Monitor, in which case the monitor "identity" field determines if two events are treated like the same thing or not. Normally you have to use a script to consolidate events into a single ticket for these monitors. There are some stock scripts, triggered by the stock monitors, for these kinds of events: "Monitor Drive Errors and Raid Failures* (117)", "Monitor Disk Blacklist Events - Informational* (246)", and "Monitor Disk Blacklist Events - Warnings and Errors* (308)". Maybe the monitor was changed from using these scripts to a custom alert template, and that is why you are getting multiple tickets now.
  35. 2 points
    I usually combat that by telling them that it should be an out of the box feature and I shouldn't be penalized for creating something that should already exist.
  36. 2 points
    { "annotations": { "list": [ { "builtIn": 1, "datasource": "-- Grafana --", "enable": true, "hide": true, "iconColor": "rgba(0, 211, 255, 1)", "name": "Annotations & Alerts", "type": "dashboard" } ] }, "editable": true, "gnetId": null, "graphTooltip": 0, "hideControls": true, "id": 12, "links": [], "refresh": false, "rows": [ { "collapse": false, "height": "25px", "panels": [ { "cacheTimeout": null, "colorBackground": false, "colorValue": false, "colors": [ "rgba(245, 54, 54, 0.9)", "rgba(237, 129, 40, 0.89)", "rgba(50, 172, 45, 0.97)" ], "datasource": "labtech", "editable": true, "error": false, "format": "none", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "id": 1, "interval": null, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "span": 4, "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": false }, "tableColumn": "count(*)", "targets": [ { "alias": "", "dsType": "sqldb", "format": "table", "groupBy": [], "hide": false, "query": "SELECT count(*) FROM labtech.computers", "rawQuery": true, "rawSql": "SELECT count(*) FROM labtech.computers", "refId": "A", "resultFormat": "time_series", "schema": "labtech", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" } ], "thresholds": "", "title": "Agents", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "avg" }, { "cacheTimeout": null, "colorBackground": false, "colorValue": false, "colors": [ "rgba(245, 54, 54, 0.9)", "rgba(237, 129, 40, 0.89)", "rgba(50, 172, 45, 0.97)" ], "datasource": "labtech", "editable": true, "error": false, "format": "none", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "id": 9, "interval": null, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "span": 4, "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": false }, "tableColumn": "COUNT(*)", "targets": [ { "alias": "", "dsType": "sqldb", "format": "table", "groupBy": [], "hide": false, "query": "SELECT COUNT(*) FROM computers WHERE $timeFilter", "rawQuery": true, "rawSql": "SELECT COUNT(*) FROM computers\r\nWHERE $__timeFilter(DateAdded)", "refId": "A", "resultFormat": "time_series", "schema": "labtech", "table": "Computers", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "DateAdded", "timeColDataType": "DateAdded : datetime", "timeDataType": "datetime" } ], "thresholds": "", "title": "Agents Added", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "avg" }, { "cacheTimeout": null, "colorBackground": false, "colorValue": false, "colors": [ 
"rgba(245, 54, 54, 0.9)", "rgba(237, 129, 40, 0.89)", "rgba(50, 172, 45, 0.97)" ], "datasource": "labtech", "editable": true, "error": false, "format": "none", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "id": 2, "interval": null, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "span": 4, "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": false }, "tableColumn": "COUNT(*)", "targets": [ { "alias": "", "dsType": "sqldb", "format": "table", "groupBy": [], "hide": false, "query": "SELECT COUNT(*) FROM clients", "rawQuery": true, "rawSql": "SELECT COUNT(*) FROM clients", "refId": "A", "resultFormat": "time_series", "schema": "labtech", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" } ], "thresholds": "", "title": "Clients", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "avg" }, { "cacheTimeout": null, "colorBackground": false, "colorValue": false, "colors": [ "rgba(245, 54, 54, 0.9)", "rgba(237, 129, 40, 0.89)", "rgba(50, 172, 45, 0.97)" ], "datasource": "labtech", "editable": true, "error": false, "format": "none", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "id": 14, "interval": null, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "span": 4, "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": false }, "tableColumn": "Patch Attempts", "targets": [ { "alias": "", "dsType": "sqldb", "format": "table", "groupBy": [], "hide": false, "query": "SELECT COUNT(DISTINCT computerid) AS 'Patch Attempts' FROM commands WHERE command = 100 AND `status` = 3 AND output LIKE '%downloaded and installed successfully%' AND dateupdated >=CURDATE()", "rawQuery": true, "rawSql": "SELECT COUNT(DISTINCT computerid) AS 'Patch Attempts' FROM commands WHERE command = 100 AND `status` = 3 AND output LIKE '%downloaded and installed successfully%' AND dateupdated >=CURDATE()", "refId": "A", "resultFormat": "time_series", "schema": "labtech", "table": "Computers", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "DateAdded", "timeColDataType": "DateAdded : datetime", "timeDataType": "datetime" } ], "thresholds": "", "title": "Succesful patches today", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "avg" }, { "cacheTimeout": null, "colorBackground": false, "colorValue": false, "colors": [ "rgba(245, 54, 54, 0.9)", "rgba(237, 129, 40, 0.89)", "rgba(50, 172, 45, 0.97)" ], "datasource": "labtech", "editable": true, "error": false, "format": "none", "gauge": { "maxValue": 100, "minValue": 0, 
"show": false, "thresholdLabels": false, "thresholdMarkers": true }, "id": 13, "interval": null, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "span": 4, "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": false }, "tableColumn": "Patch Attempts", "targets": [ { "alias": "", "dsType": "sqldb", "format": "table", "groupBy": [], "hide": false, "query": "SELECT COUNT(DISTINCT computerid) AS 'Patch Attempts' FROM commands WHERE command = 100 AND `status` = 3 AND dateupdated >=CURDATE()", "rawQuery": true, "rawSql": "SELECT COUNT(DISTINCT computerid) AS 'Patch Attempts' FROM commands WHERE command = 100 AND `status` = 3 AND dateupdated >=CURDATE()", "refId": "A", "resultFormat": "time_series", "schema": "labtech", "table": "Computers", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "DateAdded", "timeColDataType": "DateAdded : datetime", "timeDataType": "datetime" } ], "thresholds": "", "title": "Patch attempts today", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "avg" } ], "repeat": null, "repeatIteration": null, "repeatRowId": null, "showTitle": false, "title": "Agents & Clients", "titleSize": "h6" }, { "collapse": false, "height": "150px", "panels": [ { "aliasColors": { "Kristian": "#967302" }, "cacheTimeout": null, "combine": { "label": "Others", "threshold": 0 }, "datasource": "labtech", "editable": true, "error": false, "fontSize": "60%", "format": "short", "id": 11, "interval": null, "legend": { "percentage": true, "show": true, "sortDesc": true, "values": true }, "legendType": "Right side", "links": [], "maxDataPoints": 3, "nullPointMode": "connected", "pieType": "pie", "span": 4, "strokeWidth": "1", "targets": [ { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT count(TicketID) As 'xxx' FROM labtech.ticketdata WHERE DataType = '6' AND UserID = '4' AND $timeFilter ORDER BY count(TicketID)", "rawQuery": true, "rawSql": "SELECT \ncount(TicketID) As value,\n'xxx' as metric,\nIFNULL(UNIX_TIMESTAMP(TicketDataDate), UNIX_TIMESTAMP(NOW())) AS time_sec\nFROM labtech.ticketdata \nWHERE DataType = '6' AND UserID = '4' \nAND $__timeFilter(TicketDataDate)", "refId": "A", "resultFormat": "time_series", "schema": "labtech", "table": "ticketdata", "tags": [ { "key": "DataType", "operator": "=", "value": "6" }, { "condition": "AND", "key": "UserID", "operator": "=", "value": "4" } ], "targetLists": [ [ { "params": [ "TicketID" ], "type": "field" }, { "params": [], "type": "count" } ] ], "timeCol": "TicketDataDate", "timeColDataType": "TicketDataDate : timestamp", "timeDataType": "timestamp" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT count(TicketID) As 'xxx' FROM labtech.ticketdata WHERE DataType = '6' AND UserID = '9' AND $timeFilter ORDER BY count(TicketID)", "rawQuery": true, "rawSql": "SELECT \ncount(TicketID) As value,\n'xxx' as metric,\nIFNULL(UNIX_TIMESTAMP(TicketDataDate), UNIX_TIMESTAMP(NOW())) AS time_sec\nFROM labtech.ticketdata \nWHERE DataType = '6' AND UserID = '9' \nAND 
$__timeFilter(TicketDataDate)", "refId": "B", "resultFormat": "time_series", "schema": "labtech", "table": "ticketdata", "tags": [ { "key": "DataType", "operator": "=", "value": "6" }, { "condition": "AND", "key": "UserID", "operator": "=", "value": "4" } ], "targetLists": [ [ { "params": [ "TicketID" ], "type": "field" }, { "params": [], "type": "count" } ] ], "timeCol": "TicketDataDate", "timeColDataType": "TicketDataDate : timestamp", "timeDataType": "timestamp" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT count(TicketID) As 'xxx' FROM labtech.ticketdata WHERE DataType = '6' AND UserID = '10' AND $timeFilter ORDER BY count(TicketID)", "rawQuery": true, "rawSql": "SELECT \ncount(TicketID) As value,\n'xxx' as metric,\nIFNULL(UNIX_TIMESTAMP(TicketDataDate), UNIX_TIMESTAMP(NOW())) AS time_sec\nFROM labtech.ticketdata \nWHERE DataType = '6' AND UserID = '24' \nAND $__timeFilter(TicketDataDate)", "refId": "C", "resultFormat": "time_series", "schema": "labtech", "table": "ticketdata", "tags": [ { "key": "DataType", "operator": "=", "value": "6" }, { "condition": "AND", "key": "UserID", "operator": "=", "value": "4" } ], "targetLists": [ [ { "params": [ "TicketID" ], "type": "field" }, { "params": [], "type": "count" } ] ], "timeCol": "TicketDataDate", "timeColDataType": "TicketDataDate : timestamp", "timeDataType": "timestamp" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT count(TicketID) As 'xxx' FROM labtech.ticketdata WHERE DataType = '6' AND UserID = '14' AND $timeFilter ORDER BY count(TicketID)", "rawQuery": true, "rawSql": "SELECT \ncount(TicketID) As value,\n'xxx' as metric,\nIFNULL(UNIX_TIMESTAMP(TicketDataDate), UNIX_TIMESTAMP(NOW())) AS time_sec\nFROM labtech.ticketdata \nWHERE DataType = '6' AND UserID = '33' \nAND $__timeFilter(TicketDataDate)\n", "refId": "D", "resultFormat": "time_series", "schema": "labtech", "table": "ticketdata", "tags": [ { "key": "DataType", "operator": "=", "value": "6" }, { "condition": "AND", "key": "UserID", "operator": "=", "value": "4" } ], "targetLists": [ [ { "params": [ "TicketID" ], "type": "field" }, { "params": [], "type": "count" } ] ], "timeCol": "TicketDataDate", "timeColDataType": "TicketDataDate : timestamp", "timeDataType": "timestamp" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT count(TicketID) As 'xxx' FROM labtech.ticketdata WHERE DataType = '6' AND UserID = '24' AND $timeFilter ORDER BY count(TicketID)", "rawQuery": true, "rawSql": "SELECT \ncount(TicketID) As value,\n'xxx' as metric,\nIFNULL(UNIX_TIMESTAMP(TicketDataDate), UNIX_TIMESTAMP(NOW())) AS time_sec\nFROM labtech.ticketdata \nWHERE DataType = '6' AND UserID = '44' \nAND $__timeFilter(TicketDataDate)\n", "refId": "E", "resultFormat": "time_series", "schema": "labtech", "table": "ticketdata", "tags": [ { "key": "DataType", "operator": "=", "value": "6" }, { "condition": "AND", "key": "UserID", "operator": "=", "value": "4" } ], "targetLists": [ [ { "params": [ "TicketID" ], "type": "field" }, { "params": [], "type": "count" } ] ], "timeCol": "TicketDataDate", "timeColDataType": "TicketDataDate : timestamp", "timeDataType": "timestamp" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT count(TicketID) As 'xxx' FROM labtech.ticketdata WHERE DataType = '6' AND UserID = '33' AND $timeFilter ORDER BY count(TicketID)", "rawQuery": true, 
"rawSql": "SELECT \ncount(TicketID) As value,\n'xxx' as metric,\nIFNULL(UNIX_TIMESTAMP(TicketDataDate), UNIX_TIMESTAMP(NOW())) AS time_sec\nFROM labtech.ticketdata \nWHERE DataType = '6' AND UserID = '42' \nAND $__timeFilter(TicketDataDate)", "refId": "F", "resultFormat": "time_series", "schema": "labtech", "table": "ticketdata", "tags": [ { "key": "DataType", "operator": "=", "value": "6" }, { "condition": "AND", "key": "UserID", "operator": "=", "value": "4" } ], "targetLists": [ [ { "params": [ "TicketID" ], "type": "field" }, { "params": [], "type": "count" } ] ], "timeCol": "TicketDataDate", "timeColDataType": "TicketDataDate : timestamp", "timeDataType": "timestamp" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT count(TicketID) As 'xxx' FROM labtech.ticketdata WHERE DataType = '6' AND UserID = '35' AND $timeFilter ORDER BY count(TicketID)", "rawQuery": true, "rawSql": "SELECT \ncount(TicketID) As value,\n'xxx' as metric,\nIFNULL(UNIX_TIMESTAMP(TicketDataDate), UNIX_TIMESTAMP(NOW())) AS time_sec\nFROM labtech.ticketdata \nWHERE DataType = '6' AND UserID = '21' \nAND $__timeFilter(TicketDataDate)\n", "refId": "G", "resultFormat": "time_series", "schema": "labtech", "table": "ticketdata", "tags": [ { "key": "DataType", "operator": "=", "value": "6" }, { "condition": "AND", "key": "UserID", "operator": "=", "value": "4" } ], "targetLists": [ [ { "params": [ "TicketID" ], "type": "field" }, { "params": [], "type": "count" } ] ], "timeCol": "TicketDataDate", "timeColDataType": "TicketDataDate : timestamp", "timeDataType": "timestamp" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": true, "query": "SELECT count(TicketID) As 'xxx' FROM labtech.ticketdata WHERE DataType = '6' AND UserID = '44' AND $timeFilter ORDER BY count(TicketID)", "rawQuery": true, "rawSql": "SELECT\n UNIX_TIMESTAMP(<time_column>) as time_sec,\n <value column> as value,\n <series name column> as metric\nFROM <table name>\nWHERE $__timeFilter(time_column)\nORDER BY <time_column> ASC\n", "refId": "H", "resultFormat": "time_series", "schema": "labtech", "table": "ticketdata", "tags": [ { "key": "DataType", "operator": "=", "value": "6" }, { "condition": "AND", "key": "UserID", "operator": "=", "value": "4" } ], "targetLists": [ [ { "params": [ "TicketID" ], "type": "field" }, { "params": [], "type": "count" } ] ], "timeCol": "TicketDataDate", "timeColDataType": "TicketDataDate : timestamp", "timeDataType": "timestamp" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": true, "query": "SELECT count(TicketID) as 'xxx' FROM labtech.ticketdata WHERE DataType = '6' AND UserID = '43' AND $timeFilter ORDER BY count(TicketID)", "rawQuery": true, "rawSql": "SELECT\n UNIX_TIMESTAMP(<time_column>) as time_sec,\n <value column> as value,\n <series name column> as metric\nFROM <table name>\nWHERE $__timeFilter(time_column)\nORDER BY <time_column> ASC\n", "refId": "I", "resultFormat": "time_series", "schema": "labtech", "table": "ticketdata", "tags": [ { "key": "DataType", "operator": "=", "value": "6" }, { "condition": "AND", "key": "UserID", "operator": "=", "value": "4" } ], "targetLists": [ [ { "params": [ "TicketID" ], "type": "field" }, { "params": [], "type": "count" } ] ], "timeCol": "TicketDataDate", "timeColDataType": "TicketDataDate : timestamp", "timeDataType": "timestamp" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": true, "query": "SELECT count(TicketID) as 
'xxx' FROM labtech.ticketdata WHERE DataType = '6' AND UserID = '42' AND $timeFilter ORDER BY count(TicketID)", "rawQuery": true, "rawSql": "SELECT\n UNIX_TIMESTAMP(<time_column>) as time_sec,\n <value column> as value,\n <series name column> as metric\nFROM <table name>\nWHERE $__timeFilter(time_column)\nORDER BY <time_column> ASC\n", "refId": "J", "resultFormat": "time_series", "schema": "labtech", "table": "ticketdata", "tags": [ { "key": "DataType", "operator": "=", "value": "6" }, { "condition": "AND", "key": "UserID", "operator": "=", "value": "4" } ], "targetLists": [ [ { "params": [ "TicketID" ], "type": "field" }, { "params": [], "type": "count" } ] ], "timeCol": "TicketDataDate", "timeColDataType": "TicketDataDate : timestamp", "timeDataType": "timestamp" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": true, "query": "SELECT count(TicketID) as 'xxx' FROM labtech.ticketdata WHERE DataType = '6' AND UserID = '21' AND $timeFilter ORDER BY count(TicketID)", "rawQuery": true, "rawSql": "SELECT\n UNIX_TIMESTAMP(<time_column>) as time_sec,\n <value column> as value,\n <series name column> as metric\nFROM <table name>\nWHERE $__timeFilter(time_column)\nORDER BY <time_column> ASC\n", "refId": "K", "resultFormat": "time_series", "schema": "labtech", "table": "ticketdata", "tags": [ { "key": "DataType", "operator": "=", "value": "6" }, { "condition": "AND", "key": "UserID", "operator": "=", "value": "4" } ], "targetLists": [ [ { "params": [ "TicketID" ], "type": "field" }, { "params": [], "type": "count" } ] ], "timeCol": "TicketDataDate", "timeColDataType": "TicketDataDate : timestamp", "timeDataType": "timestamp" } ], "title": "Tickets Finished", "transparent": false, "type": "grafana-piechart-panel", "valueName": "current" }, { "aliasColors": {}, "cacheTimeout": null, "combine": { "label": "Others", "threshold": 0 }, "datasource": "labtech", "editable": true, "error": false, "fontSize": "80%", "format": "short", "id": 6, "interval": null, "legend": { "percentage": true, "show": true, "values": true }, "legendType": "Right side", "links": [], "maxDataPoints": 3, "nullPointMode": "connected", "pieType": "pie", "span": 4, "strokeWidth": "1", "targets": [ { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT COUNT(*) as \"New\" FROM tickets WHERE STATUS = 1", "rawQuery": true, "rawSql": "SELECT \n IFNULL(UNIX_TIMESTAMP(StartedDate), UNIX_TIMESTAMP(NOW())) AS time_sec,\n COUNT(*) as value,\n 'New' as metric\nFROM tickets \nWHERE STATUS = 1\nAND $__timeFilter(StartedDate)", "refId": "A", "resultFormat": "time_series", "schema": "labtech", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" }, { "alias": "", "format": "time_series", "hide": false, "rawSql": "SELECT \n IFNULL(UNIX_TIMESTAMP(StartedDate), UNIX_TIMESTAMP(NOW())) AS time_sec,\n COUNT(*) as value,\n 'Open' as metric\nFROM tickets \nWHERE STATUS = 2\nAND $__timeFilter(StartedDate)", "refId": "B" }, { "alias": "", "format": "time_series", "hide": false, "rawSql": "SELECT \n IFNULL(UNIX_TIMESTAMP(StartedDate), UNIX_TIMESTAMP(NOW())) AS time_sec,\n COUNT(*) as value,\n 'Stalled' as metric\nFROM tickets \nWHERE STATUS = 3\nAND $__timeFilter(StartedDate)", "refId": "C" } ], "timeShift": null, "title": "Tickets", "transparent": false, "type": "grafana-piechart-panel", "valueName": "current" }, { "aliasColors": { "Geen": "#CFFAFF", "Prio 2 - Critical": 
"#BF1B00", "Prio 4 - Normal": "#508642", "Prio 5 - Low": "#967302", "automated": "#447EBC" }, "cacheTimeout": null, "combine": { "label": "Others", "threshold": 0 }, "datasource": "labtech", "editable": true, "error": false, "fontSize": "80%", "format": "short", "hideTimeOverride": false, "id": 10, "interval": null, "legend": { "percentage": true, "show": true, "sortDesc": true, "values": true }, "legendType": "Right side", "links": [], "maxDataPoints": 3, "nullPointMode": "connected", "pieType": "pie", "span": 4, "strokeWidth": 1, "targets": [ { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT COUNT(*) AS Geen FROM tickets WHERE STATUS IN (1,2,3) AND (Priority = 2)", "rawQuery": true, "rawSql": "SELECT \r\n IFNULL(UNIX_TIMESTAMP(StartedDate), UNIX_TIMESTAMP(NOW())) AS time_sec,\r\n COUNT(*) AS VALUE,\r\n 'Geen' AS metric\r\nFROM tickets \r\nWHERE STATUS IN (1,2,3) AND (Priority = 2)", "refId": "A", "resultFormat": "time_series", "schema": "labtech", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT COUNT(*) AS 'Prio 5 - Low' FROM tickets WHERE STATUS IN (1,2,3) AND (Priority = 4)", "rawQuery": true, "rawSql": "SELECT \n IFNULL(UNIX_TIMESTAMP(StartedDate), UNIX_TIMESTAMP(NOW())) AS time_sec,\n COUNT(*) as value,\n 'Prio 5 - Low' as metric\nFROM tickets\nWHERE STATUS IN (1,2,3) AND (Priority = 4)\nAND $__timeFilter(StartedDate)", "refId": "B", "resultFormat": "time_series", "schema": "labtech", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "query": "SELECT COUNT(*) AS 'Prio 4 - Normal' FROM tickets WHERE STATUS IN (1,2,3) AND (Priority = 10)", "rawQuery": true, "rawSql": "SELECT \n IFNULL(UNIX_TIMESTAMP(StartedDate), UNIX_TIMESTAMP(NOW())) AS time_sec,\n COUNT(*) as value,\n 'Prio 4 - Normal' as metric\nFROM tickets\nWHERE STATUS IN (1,2,3) AND (Priority = 10)\nAND $__timeFilter(StartedDate)", "refId": "C", "resultFormat": "time_series", "schema": "labtech", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT COUNT(*) AS 'Prio 3 - High' FROM tickets WHERE STATUS IN (1,2,3) AND (Priority = 14)", "rawQuery": true, "rawSql": "SELECT \n IFNULL(UNIX_TIMESTAMP(StartedDate), UNIX_TIMESTAMP(NOW())) AS time_sec,\n COUNT(*) as value,\n 'Prio 3 - High' as metric\nFROM tickets\nWHERE STATUS IN (1,2,3) AND (Priority = 14)\nAND $__timeFilter(StartedDate)\n", "refId": "D", "resultFormat": "time_series", "schema": "labtech", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT COUNT(*) AS 'Prio 1 - Emergency' FROM tickets WHERE STATUS IN (1,2,3) AND (Priority = 19)", "rawQuery": true, "rawSql": "SELECT \n IFNULL(UNIX_TIMESTAMP(StartedDate), UNIX_TIMESTAMP(NOW())) AS time_sec,\n COUNT(*) as value,\n 'Prio 1 - Emergency' as metric\nFROM tickets\nWHERE STATUS IN 
(1,2,3) AND (Priority = 19)\nAND $__timeFilter(StartedDate)\n", "refId": "E", "resultFormat": "time_series", "schema": "labtech", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT COUNT(*) AS 'Prio 2 - Critical' FROM tickets WHERE STATUS IN (1,2,3) AND (Priority = 17)", "rawQuery": true, "rawSql": "SELECT \n IFNULL(UNIX_TIMESTAMP(StartedDate), UNIX_TIMESTAMP(NOW())) AS time_sec,\n COUNT(*) as value,\n 'Prio 2 - Critical' as metric\nFROM tickets\nWHERE STATUS IN (1,2,3) AND (Priority = 17)\nAND $__timeFilter(StartedDate)\n", "refId": "F", "resultFormat": "time_series", "schema": "labtech", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" }, { "alias": "", "dsType": "sqldb", "format": "time_series", "groupBy": [], "hide": false, "query": "SELECT COUNT(*) AS 'automated' FROM tickets WHERE STATUS IN (1,2,3) AND (Priority = 5)", "rawQuery": true, "rawSql": "SELECT \n IFNULL(UNIX_TIMESTAMP(StartedDate), UNIX_TIMESTAMP(NOW())) AS time_sec,\n COUNT(*) as value,\n 'Automatec' as metric\nFROM tickets\nWHERE STATUS IN (1,2,3) AND (Priority = 5)\nAND $__timeFilter(StartedDate)", "refId": "G", "resultFormat": "time_series", "schema": "labtech", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" } ], "title": "Ticket Priority", "transparent": false, "type": "grafana-piechart-panel", "valueName": "current" }, { "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "labtech", "decimals": 0, "editable": true, "error": false, "fill": 1, "grid": {}, "height": "200", "hideTimeOverride": false, "id": 5, "interval": "60s", "legend": { "alignAsTable": false, "avg": false, "current": false, "hideEmpty": false, "hideZero": false, "max": false, "min": false, "rightSide": false, "show": true, "sideWidth": 25, "total": false, "values": false }, "lines": true, "linewidth": 2, "links": [ { "type": "dashboard" } ], "nullPointMode": "connected", "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "seriesOverrides": [ { "alias": "Tickets Closed", "yaxis": 1 } ], "spaceLength": 10, "span": 10, "stack": false, "steppedLine": false, "targets": [ { "alias": "", "format": "time_series", "hide": true, "rawSql": "SELECT\r\n UNIX_TIMESTAMP(StartedDate) AS time_sec,\r\n COUNT(TicketID) AS value,\r\n \"Tickets Created\" AS metric\r\nFROM labtech.tickets\r\nWHERE STATUS IN (1,2)\r\nAND $__timeFilter(StartedDate)\r\nGROUP BY UNIX_TIMESTAMP(StartedDate)\r\nORDER BY StartedDate ASC\r\n", "refId": "B" }, { "alias": "", "format": "time_series", "hide": true, "rawSql": "SELECT\n UNIX_TIMESTAMP(UpdateDate) AS time_sec,\n COUNT(TicketID) AS value,\n \"Tickets Closed\" AS metric\nFROM labtech.tickets\nWHERE STATUS = 4\nAND $__timeFilter(UpdateDate)\nGROUP BY UNIX_TIMESTAMP(UpdateDate)\nORDER BY UpdateDate ASC\n", "refId": "D" }, { "alias": "", "format": "time_series", "hide": false, "rawSql": "CALL generate_series_date_minute_base($__timeFrom(), $__timeTo(), 5);", "refId": "A" }, { "alias": "", "format": "time_series", "rawSql": "SELECT UNIX_TIMESTAMP(series_tmp.series) as time_sec, IFNULL(tmp.Cnt,0) as value, \"Tickets Created\" AS metric FROM series_tmp\nLEFT JOIN (\nSELECT COUNT(*) 
AS Cnt, FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(tickets.StartedDate)/300)*300) AS datum FROM labtech.tickets\nWHERE tickets.STATUS IN (1,2)\nGROUP BY 2\n) tmp ON tmp.Datum = series_tmp.series\nGROUP BY series_tmp.series;\n", "refId": "C" }, { "alias": "", "format": "time_series", "rawSql": "SELECT UNIX_TIMESTAMP(series_tmp.series) as time_sec, IFNULL(tmp.Cnt,0) as value, \"Tickets Closed\" AS metric FROM series_tmp\nLEFT JOIN (\nSELECT COUNT(*) AS Cnt, FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(tickets.StartedDate)/300)*300) AS datum FROM labtech.tickets\nWHERE tickets.STATUS = 4\nGROUP BY 2\n) tmp ON tmp.Datum = series_tmp.series\nGROUP BY series_tmp.series;\n", "refId": "E" }, { "alias": "", "format": "time_series", "rawSql": "SELECT UNIX_TIMESTAMP(series_tmp.series) as time_sec, IFNULL(tmp.Cnt,0) as value, \"Tickets Combined\" AS metric FROM series_tmp\nLEFT JOIN (\nSELECT COUNT(*) AS Cnt, FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(tickets.StartedDate)/300)*300) AS datum FROM labtech.tickets\nWHERE tickets.STATUS = 6\nGROUP BY 2\n) tmp ON tmp.Datum = series_tmp.series\nGROUP BY series_tmp.series;\n", "refId": "F" } ], "thresholds": [ { "colorMode": "custom", "fill": true, "fillColor": "rgb(248, 214, 110)", "line": true, "lineColor": "rgb(233, 188, 188)", "op": "lt", "value": null } ], "timeFrom": null, "timeShift": null, "title": "Tickets", "tooltip": { "msResolution": true, "shared": true, "sort": 0, "value_type": "cumulative" }, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": false } ] }, { "columns": [], "datasource": "labtech", "editable": true, "error": false, "fontSize": "80%", "id": 12, "links": [], "pageSize": 200, "scroll": false, "showHeader": true, "sort": { "col": 1, "desc": true }, "span": 2, "styles": [ { "colorMode": "row", "colors": [ "rgba(245, 54, 54, 0.9)", "rgba(237, 129, 40, 0.89)", "rgba(50, 172, 45, 0.97)" ], "dateFormat": "YYYY-MM-DD HH:mm:ss", "decimals": 1, "pattern": "/.*/", "thresholds": [ "3", "6" ], "type": "number", "unit": "short" } ], "targets": [ { "alias": "", "dsType": "sqldb", "format": "table", "groupBy": [], "hide": false, "query": "SELECT Users.Name AS Engineer, ((SUM(hours)*60)+SUM(mins))/60 AS Hours FROM (timeslips LEFT JOIN timecategory ON timecategory.id=timeslips.category) LEFT JOIN users ON users.userid=timeslips.userid WHERE (timeslips.Date > CURDATE()) GROUP BY timeslips.userid ORDER BY Hours DESC", "rawQuery": true, "rawSql": "SELECT Users.Name AS Engineer, ((SUM(hours)*60)+SUM(mins))/60 AS hours\nFROM (timeslips LEFT JOIN timecategory ON timecategory.id=timeslips.category)\nLEFT JOIN users ON users.userid=timeslips.userid\nWHERE (timeslips.Date > CURDATE()) \nGROUP BY timeslips.userid \nORDER BY Hours DESC", "refId": "A", "resultFormat": "table", "schema": "labtech", "table": "timeslips", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" } ], "title": "Labtech Timeslips", "transform": "table", "type": "table" } ], "repeat": null, "repeatIteration": null, "repeatRowId": null, "showTitle": false, "title": "Tickets", "titleSize": "h6" }, { "collapse": false, "height": "450px", "panels": [ { "columns": [], "datasource": "labtech", "editable": true, "error": false, "fontSize": "80%", "hideTimeOverride": true, "id": 7, 
"links": [], "pageSize": null, "scroll": true, "showHeader": true, "sort": { "col": 6, "desc": true }, "span": 6, "styles": [ { "colorMode": null, "colors": [ "rgba(245, 54, 54, 0.9)", "rgba(237, 129, 40, 0.89)", "rgba(50, 172, 45, 0.97)" ], "dateFormat": "YYYY-MM-DD HH:mm:ss", "decimals": 0, "pattern": "TicketID", "sanitize": false, "thresholds": [], "type": "string", "unit": "short" }, { "colorMode": "cell", "colors": [ "rgba(50, 172, 45, 0.97)", "rgba(237, 129, 40, 0.89)", "rgba(245, 54, 54, 0.9)" ], "dateFormat": "YYYY-MM-DD HH:mm:ss", "decimals": 0, "pattern": "Looptijd", "thresholds": [ "8", "16" ], "type": "number", "unit": "none" } ], "targets": [ { "alias": "", "dsType": "sqldb", "format": "table", "groupBy": [], "hide": false, "query": "SELECT Tickets.TicketID , Users.Name AS Engineer, Clients.Name AS CLIENT , TicketPriority.Name AS Priority, infocategory.CategoryName AS Categorie, Tickets.Subject, IF (elapsed_working_hours(TD1.TicketDataDate, TD3.TicketDataDate) IS NULL, elapsed_working_hours(TD1.TicketDataDate,NOW()), elapsed_working_hours(TD1.TicketDataDate, TD3.TicketDataDate)) AS Looptijd FROM Tickets LEFT JOIN TicketData TD1 ON TD1.TicketDataID = (SELECT MIN(TD.TicketDataID) FROM TicketData TD WHERE (Tickets.TicketID = TD.TicketID) AND (TD.DataType = 1)) LEFT JOIN TicketData TD3 ON TD3.TicketDataID = (SELECT MAX(TD.TicketDataID) FROM TicketData TD WHERE (Tickets.TicketID = TD.TicketID) AND (TD.DataType = 6)) LEFT JOIN Users ON Tickets.userID = Users.UserID LEFT JOIN Clients ON Clients.ClientID = Tickets.ClientID LEFT JOIN TicketPriority ON TicketPriority.Priority = Tickets.Priority LEFT JOIN infocategory ON infocategory.ID = Tickets.Category WHERE (Tickets.Status IN (2,3)) AND (infocategory.id IN (159, 160, 165, 166)) ORDER BY Looptijd DESC LIMIT 5", "rawQuery": true, "rawSql": "SELECT Tickets.TicketID , Users.Name AS Engineer, Clients.Name AS CLIENT , TicketPriority.Name AS Priority, infocategory.CategoryName AS Categorie, Tickets.Subject, IF (elapsed_working_hours(TD1.TicketDataDate, TD3.TicketDataDate) IS NULL, elapsed_working_hours(TD1.TicketDataDate,NOW()), elapsed_working_hours(TD1.TicketDataDate, TD3.TicketDataDate)) AS Looptijd FROM Tickets LEFT JOIN TicketData TD1 ON TD1.TicketDataID = (SELECT MIN(TD.TicketDataID) FROM TicketData TD WHERE (Tickets.TicketID = TD.TicketID) AND (TD.DataType = 1)) LEFT JOIN TicketData TD3 ON TD3.TicketDataID = (SELECT MAX(TD.TicketDataID) FROM TicketData TD WHERE (Tickets.TicketID = TD.TicketID) AND (TD.DataType = 6)) LEFT JOIN Users ON Tickets.userID = Users.UserID LEFT JOIN Clients ON Clients.ClientID = Tickets.ClientID LEFT JOIN TicketPriority ON TicketPriority.Priority = Tickets.Priority LEFT JOIN infocategory ON infocategory.ID = Tickets.Category WHERE (Tickets.Status IN (2,3)) AND (infocategory.id IN (159, 160, 165, 166)) ORDER BY Looptijd DESC LIMIT 5", "refId": "A", "resultFormat": "table", "schema": "labtech", "table": "tickets", "tags": [], "targetLists": [ [ { "params": [ "*" ], "type": "field" } ] ], "timeCol": "time", "timeColDataType": "time : type", "timeDataType": "type" } ], "title": "Lopende support tickets", "transform": "table", "transparent": true, "type": "table" }, { "columns": [], "datasource": "labtech", "editable": true, "error": false, "fontSize": "100%", "id": 8, "links": [], "pageSize": null, "scroll": true, "showHeader": true, "sort": { "col": 3, "desc": false }, "span": 6, "styles": [ { "colorMode": null, "colors": [ "rgba(245, 54, 54, 0.9)", "rgba(237, 129, 40, 0.89)", "rgba(50, 172, 45, 0.97)" ], 
"dateFormat": "YYYY-MM-DD HH:mm:ss", "decimals": 2, "pattern": "computerid", "sanitize": false, "thresholds": [], "type": "string", "unit": "short" }, { "colorMode": null, "colors": [ "rgba(245, 54, 54, 0.9)", "rgba(237, 129, 40, 0.89)", "rgba(50, 172, 45, 0.97)" ], "dateFormat": "YYYY-MM-DD HH:mm:ss", "decimals": 2, "pattern": "Last contact", "thresholds": [], "type": "date", "unit": "short" } ], "targets": [ { "alias": "", "dsType": "sqldb", "format": "table", "groupBy": [], "hide": false, "query": "SELECT computers.computerid,computers.Name AS ComputerName,CONVERT(CONCAT(clients.name,' - ',locations.name) USING utf8) AS Location, computers.`LastContact` AS 'Last contact' FROM (computers LEFT JOIN Locations ON Locations.LocationID=computers.Locationid) LEFT JOIN Clients ON Clients.ClientID=Computers.clientid JOIN AgentComputerData ON Computers.ComputerID=AgentComputerData.ComputerID WHERE computers.`LastContact` < DATE_ADD(NOW(),INTERVAL -7 MINUTE) AND ((Computers.OS LIKE '%server%' OR Computers.OS LIKE '%linux%' )) ORDER BY LastContact DESC", "rawQuery": true, "rawSql": "SELECT computers.computerid,computers.Name AS ComputerName,CONVERT(CONCAT(clients.name,' - ',locations.name) USING utf8) AS Location, computers.`LastContact` AS 'Last contact' FROM (computers LEFT JOIN Locations ON Locations.LocationID=computers.Locationid) LEFT JOIN Clients ON Clients.ClientID=Computers.clientid JOIN AgentComputerData ON Computers.ComputerID=AgentComputerData.ComputerID WHERE computers.`LastContact` < DATE_ADD(NOW(),INTERVAL -7 MINUTE) AND ((Computers.OS LIKE '%server%' OR Computers.OS LIKE '%linux%' )) ORDER BY LastContact DESC", "refId": "A", "resultFormat": "table", "schema": "labtech", "table": "computers", "tags": [ { "key": "LastContact", "operator": "<", "value": "date_add(now(),interval -7 minute)" }, { "condition": "AND", "key": "OS", "operator": "=", "value": "%server%" } ], "targetLists": [ [ { "params": [ "Name" ], "type": "field" } ] ], "timeCol": "LastContact", "timeColDataType": "LastContact : datetime", "timeDataType": "datetime" } ], "title": "Offline servers", "transform": "table", "transparent": true, "type": "table" } ], "repeat": null, "repeatIteration": null, "repeatRowId": null, "showTitle": false, "title": "Lopende Tickets + Offline Server", "titleSize": "h6" }, { "collapse": false, "height": "250px", "panels": [], "repeat": null, "repeatIteration": null, "repeatRowId": null, "showTitle": false, "title": "New row", "titleSize": "h6" } ], "schemaVersion": 14, "style": "dark", "tags": [], "templating": { "list": [] }, "time": { "from": "now/d", "to": "now" }, "timepicker": { "refresh_intervals": [ "5m", "15m", "30m", "1h" ], "time_options": [ "5m", "15m", "1h", "6h", "12h", "24h", "2d", "7d", "30d" ] }, "timezone": "browser", "title": "Labtech", "version": 21 }
  37. 2 points
The horribleness that is the new Agent UI Script tile has made Script Log reading a painful experience, but it started a conversation about combining SCRIPT LOG entries when possible. One method would be to defer logging, using a variable to accumulate information and then logging it all at once. Another would be to call your own SQL to append information into an existing entry. I believe this script is superior to both methods. With this script you would use the SCRIPT LOG step as normal. Each time it is used, a new entry will be recorded as normal; no special treatment of logging is needed. At the end of your script you just call this function script. This script will combine all the script log lines recorded on this computer under the current script ID, since the start of this script, into one entry and delete the other entries. The combined result will also be returned in a variable named "@SCRIPTLOGS@" in case you wanted to email them, attach them to a ticket, etc.

Download Here: FUNCTION - Consolidate Script Log Entries.zip

Here is an example before the script is called, with the individual SCRIPT LOG entries.

Here is the Script Log entry after running.

Thank you @johnduprey for the work you did on this idea, which inspired me to create and share this!
  38. 2 points
Dell: I've got Dell Command Configure automatically configuring the BIOS to turn on all Dell desktop workstations at 11PM. I've been using this for a few years to help with stubborn users who think they should power off their machines at night. Helps with after-hours maintenance tasks. The Automate side of it is pretty simple: a search that finds Dell workstations that aren't portable, tied to an auto-join group. Then a check to see if the software is already installed, run the install if it's not, and then, regardless of whether it was just installed, I run the command line version of the software, cctk.exe, and pass it arguments. The bit that took a little research was the arguments, but the documentation of the utility is quite good. https://www.dell.com/support/home/us/en/04/drivers/driversdetails?driverid=2krxv

Silent install:

%windir%\System32\msiexec.exe /i %windir%\LTSvc\packages\DellCCTK\CommandConfigure.msi /qn

"%ProgramFiles(x86)%\Dell\Command Configure\X86_64\cctk.exe" --autoon=everyday
"%ProgramFiles(x86)%\Dell\Command Configure\X86_64\cctk.exe" --autoonhr=23
"%ProgramFiles(x86)%\Dell\Command Configure\X86_64\cctk.exe" --autoonmn=0

You'll need logic in there to run the X86 version instead when needed; it's the same path except "...\X86\cctk.exe" (a sketch of that branch follows at the end of this post).

Lenovo: Get the attached copied to the machines, and then here are the commands to run as a Shell function:

cscript.exe %windir%\LTSvc\packages\LenovoWMI\setconfig.vbs "Wake Up on Alarm" "Daily Event"
cscript.exe %windir%\LTSvc\packages\LenovoWMI\setconfig.vbs "Alarm Time (HH:MM:SS)" "11:00:00"
cscript.exe %windir%\LTSvc\packages\LenovoWMI\setconfig.vbs "After Power Loss" "Power On"
cscript.exe %windir%\LTSvc\packages\LenovoWMI\listall.vbs > %windir%\LTSvc\packages\LenovoWMI\listall.txt

HP: Get the attached files copied, use whatever logic you want to deal with 32 vs 64 bit, and change the commands for the 32-bit one. You'll notice that there are two entries for the power-on time - at some point HP changed the syntax in the damn config file. Why? Who knows. I stumbled on this and I'm not promising that there aren't other things that they changed for no reason at all. This has worked for me though.

%windir%\LTSvc\packages\HPBCU\BiosConfigUtility64.exe /setvalue:"After Power Loss","Power On"
%windir%\LTSvc\packages\HPBCU\BiosConfigUtility64.exe /setvalue:"BIOS Power-On Time (hh:mm)","23:00"
%windir%\LTSvc\packages\HPBCU\BiosConfigUtility64.exe /setvalue:"BIOS Power-On Hour","23"
%windir%\LTSvc\packages\HPBCU\BiosConfigUtility64.exe /setvalue:"Sunday","Enable"
%windir%\LTSvc\packages\HPBCU\BiosConfigUtility64.exe /setvalue:"Monday","Enable"
%windir%\LTSvc\packages\HPBCU\BiosConfigUtility64.exe /setvalue:"Tuesday","Enable"
%windir%\LTSvc\packages\HPBCU\BiosConfigUtility64.exe /setvalue:"Wednesday","Enable"
%windir%\LTSvc\packages\HPBCU\BiosConfigUtility64.exe /setvalue:"Thursday","Enable"
%windir%\LTSvc\packages\HPBCU\BiosConfigUtility64.exe /setvalue:"Friday","Enable"
%windir%\LTSvc\packages\HPBCU\BiosConfigUtility64.exe /setvalue:"Saturday","Enable"
%windir%\LTSvc\packages\HPBCU\BiosConfigUtility64.exe /setvalue:"Wake On LAN","Boot to Hard Drive"

HPBCU.zip LenovoWMI.zip
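As mentioned above, you need a branch to pick the right cctk.exe build. A minimal PowerShell sketch of that check, assuming Dell Command Configure's default install locations from the post:

# Pick the cctk.exe build that matches the OS architecture (default install paths assumed).
if ([Environment]::Is64BitOperatingSystem) {
    $Cctk = "${env:ProgramFiles(x86)}\Dell\Command Configure\X86_64\cctk.exe"
} else {
    $Cctk = "$env:ProgramFiles\Dell\Command Configure\X86\cctk.exe"   # no "(x86)" folder on 32-bit Windows
}

# Same settings as the shell commands above: auto power-on every day at 23:00.
& $Cctk --autoon=everyday
& $Cctk --autoonhr=23
& $Cctk --autoonmn=0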
  39. 2 points

    Version 1.2.1

    344 downloads

This bundle contains separate SQL imports to add the following Scriptlets:

Check If Agent Is Online - Just checks if the agent has a recent contact timestamp, and branches based on the result.

Check Script Execution Delay Before Continuing - Define how much time can pass between a script's SCHEDULED start time and its actual start. For example: allow a script to run only within 4 hours of the time it was scheduled.

Check Script Execution Window Before Continuing - Define a start time and end time. If the script begins running outside of this window, it will exit. The time is compensated for the agent's timezone and supports wrapping past midnight. (Start at 23:00 and End at 03:00 is understood as between 11:00PM and 3:00AM - see the sketch at the end of this post.)

Confirm Contract Status - Checks that an Agent is Under MSP Contract, and is not Excluded From MSP Contract.

Example Loop - A simple loop (like in the Scripting Lab) to execute a loop repeatedly until it reaches the specified limit.

Example SQL Loop - A more complex loop that uses the SQL Get Dataset function to retrieve a recordset and loop through the results.

Exit If Not Windows OS - Basic 2-line script starter to abort if the agent is not running a Windows OS.

Lookup Password Credentials - Allows you to retrieve the Title, Username, Password, URL, Notes, and ExpireDate for a password by providing the ID. Includes a query to find the "best" user matching a name, ranked by location and expiration date, or use your own logic to choose the ID.

Pause Script Until Remote Commands are Retrieved - An OS-neutral command that guarantees that the script will halt until the agent picks up the command and returns the result. Useful for pacing some operations that can fail if executed out of order. In times past, not all remote commands would pause the script. A "Create Folder" and a "Copy File" command could be scripted one after the other, but both could end up queued at the same time, and the agent might start the copy before the folder had been created. Also useful after executing an agent restart or service restart, as it ensures that the agent is checking in again. The Script Wait allows additional time to pass before continuing the script execution.

Process Text One Line at a Time - Takes a variable with multiple lines of text (like output from a command) and loops over it one line at a time (like the SQL Loop does with rows).

Run Temporary Batch or Shell Script - Largely replaced by the Execute Script function for Windows, but still useful for OSX/Linux. (And it still works fine with Windows.) This scriptlet generates a random filename, writes the script contents to the file, executes it, and then removes the script from the agent.

To import a scriptlet, just use the Tools->Import->SQL function in Control Center (or load the file contents into a Query Editor in SQLYog). To use the "Run Temporary Batch or Shell Script", refer to the screenshots below showing how to edit for Windows or OSX/Linux use. (Or both in one script if you are supporting multiple OS types!)

Windows Batch Script:

OSX/Linux Shell Script:
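As an aside on the "Check Script Execution Window Before Continuing" scriptlet referenced above: the midnight-wrap behavior it describes comes down to a single comparison. Here is a minimal PowerShell sketch of just that logic (the actual scriptlet is built from Automate script steps and also compensates for the agent's timezone; the times below are the example values from the description):

# Execution window check that wraps past midnight: Start 23:00, End 03:00 means 11:00PM-3:00AM.
$Start = [TimeSpan]'23:00'
$End   = [TimeSpan]'03:00'
$Now   = (Get-Date).TimeOfDay

$InWindow = if ($Start -le $End) {
    ($Now -ge $Start) -and ($Now -lt $End)   # normal same-day window
} else {
    ($Now -ge $Start) -or ($Now -lt $End)    # window wraps past midnight
}

if (-not $InWindow) { exit }   # outside the window: exit, as the scriptlet does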
  40. 2 points

    Version 1.0.1

    257 downloads

    This Dataview is basically the same as the "Computer Status" Dataview, but I have added a column for Agent Idle Time. I find it helpful when I need to see quickly which users are on their systems, and which machines are not being used or have long idle times so that I can work without disrupting an active user. I have added another column, `Agent Additional Users`. This shows any other logins reported by Automate, such as on a Terminal Server. For only a couple of columns difference, I have found it to be a very useful dataview and refer to it often. To import, just extract the .sql file from the zip. In Control Center select Tools->Import->SQL File. It will ask if you want to import 3 statements. This is normal. When finished, select Tools->Reload Cache.
  41. 2 points
    @ZenTekDS Here is a copy. It should import into a folder called "Partner Scripts". Thorough Disk Cleaner v2.5.zip
  42. 2 points
I completely agree, which is precisely why there should be a proper structure in place for reporting security vulnerabilities. Losing vulnerabilities because a support ticket got closed when a partner didn't respond is serious amateur hour stuff. This is also the second time I know of that this has happened (one of my privately reported ones got lost in the same way, mostly because the initial support engineer could not comprehend what I was trying to raise). I implore ConnectWise to put a proper procedure in place for reporting security vulnerabilities, allowing for responsible disclosure. In the meantime, at least train the existing staff to escalate anything like this immediately to the appropriate resource.
  43. 2 points
I have this information in a Remote Monitor. I cover how to set up remote monitors here: https://gavsto.com/remote-monitor-trigger-an-alert-when-a-profile-goes-above-a-certain-size-including-setup-tips-for-remote-monitors/

But instead, run this in the Executable / Arguments:

"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -command "& {(gp -path \"HKLM:\SOFTWARE\Microsoft\Internet Explorer\").svcVersion}"

You can then filter out the data using the monitors. You can filter by Client Name in the same window, as well as export to Excel using the options dropdown in the same window.
  44. 2 points
At START:

Variable Set: SQL Query: StartTimeValue=[SELECT NOW()+0;]

At END:

Variable Set: SQL Query: ElapsedMinutes=[SELECT TIMESTAMPDIFF(MINUTE,TIMESTAMP('@StartTimeValue@'),NOW());]

I haven't tested that, but these are the important parts. SELECT NOW() in SQLYog or something will return a SQL date string like '2018-05-30 10:11:59'. If you return a time/date stamp using SELECT NOW() and assign it to a script variable, the script engine turns it into a string formatted according to the system locale, like '5/30/2018 10:11:59 AM'. But this string does not conform to a SQL timestamp string, so you cannot use it to represent a time later in SQL queries. SELECT NOW()+0 returns a numeric value = YYYYMMDDhhmmss, which is left alone by the engine. So TIMESTAMP('20180530101159') would return a valid timestamp that can be used with other time/date functions, such as TIMESTAMPDIFF(). See https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_timestampdiff for more details. (FWIW, the returned value from NOW()+0 is a decimal, but the decimal portion is 0. I sometimes use SELECT ROUND(NOW()+0) to throw away the decimal part explicitly, but I use SELECT NOW()+0 also...)

Another way to do it, using only steps at the end or a function script, is:

Variable Set: SQL Query: ElapsedMinutes=[SELECT TIMESTAMPDIFF(MINUTE,Executed,NOW()) FROM RunningScripts WHERE ThreadID='%ThreadID%';]

RunningScripts.Executed will be set when the script actually begins executing (not just when it is waiting to run), so it works as a StartTimeValue reference. A reason you would want to specifically save a start time as a variable would be if your script was calling another script on a delay and you wanted to count the entire time period. The value would be passed to the other script, where runningscripts.executed would only have the start time that the last script began to execute. When you run scripts with a delay of 0, they are in the same thread and they don't have their own "executed" timestamp.
  45. 2 points

    Version 1.0.1

    191 downloads

This Dataview incorporates some data from the Disk Inventory Dataview and the Computer Chassis Dataview, including Video Card, Memory, and CPU columns. I created it when a client asked for assistance gathering this information. The first time, we exported from multiple Dataviews and merged the columns in Excel. The next time they asked, I made this so that one Dataview had all the columns that were important in this instance. Perhaps you would like these columns in a single Dataview also. The SQL is safe to import through Control Center->Tools->Import SQL. The Dataview should be added into the Inventory folder.
  46. 2 points
Here's the latest version of this script that we've now used successfully on over 100 systems. This one is pushing build 1709, but you'll notice you need to edit it to provide the path to your server hosting the ISO. I was a little lazy this time and just exported one script with all of the other ones embedded in the export. When you import this, you'll get the following scripts:

- Upgrade Win 10 (From Tray Icon) - ERG: Script designed to be added to your tray icon to let users start the install for themselves
- Remove Duplicate Scripts in Queue: A function script I wrote to avoid having the same script queued multiple times for an agent
- Upgrade Windows 10 - ERG: The main show. This is what you run manually if you want to schedule the script for an agent yourself.
- BITS Download - ERG: Helper function script to handle downloading the ISO using BITS
- BITS Status - ERGTEST: Ignore this. Not really sure why it's here
- BITS Status - ERG: Helper function to the helper function script above. I know @DarrenWhite99 says you don't need to do it this way, but I had issues the 'official' way and this has been working for us.
- FUNCTION - Email Results to Technician*: @DarrenWhite99's amazing function script for emailing whoever kicked off the script giving status updates

I don't recall having any failure-to-reboot issues with this version. NOTE: I've removed any function for upgrading from prior versions of Windows (7 and 8). This is strictly for doing build-to-build upgrades of Windows 10 systems now. Upgrade Win10-ERG.zip
  47. 2 points
This document is meant as a successor to and replacement for "How I got to 95% patch efficacy in Eight Easy(?) Steps", mainly because about half of it is obsolete, but also because "efficacy" wasn't used correctly in that context and I know better now. Mostly. My hope is to make this more of a living document. I'd like to update this original front post as changes to it are needed, rather than force the reader to descend into seven pages of comments to find the most up-to-date solution for any given problem.

DISCLAIMER: I'm still running the "Classic" Patch Manager, based on the advice of so many other Geeks who advise that the LT11 Patch Manager is still problematic as of this writing. Either way, this discussion focuses mostly on the functionality of Windows itself relative to how LabTech delivers updates, so a large portion of this document should remain fairly universal.

OTHER DISCLAIMER: I am not responsible for what happens when you follow my advice. Nope.

Finally, these are mainly descriptions of what I've found to be useful, coupled with a discussion about how Windows Updates actually work. There isn't much meat here on Windows 10, mainly because right now it patches pretty well, and the problem of delivering whole operating system upgrades is slightly beyond the scope of this discussion (for now). Let's dive in.

I. Master the Windows Update Agent Version.

Put simply, the Windows Update Agent is the collection of code that uses Microsoft Update to check the patch status of the OS versus available updates from Microsoft, as well as drive update scheduling and UI behavior. Between the time when the Win7 and Win8/8.1 kernels were released and now, the WUA has changed tremendously, to the point where the original WUA code shipped with the OS will no longer effectively patch a system in a reasonable amount of time if you try to use it today. You may find that you get no useful data, a failed hotfix inventory, a pegged CPU, and/or other unpleasant behavior.

This means it's imperative that you track the WUA of your endpoints and make sure they're up-to-date, especially when you onboard new clients that come from questionable patching practices. The most common way to do this is simply to set up an EDF for each computer and have a script populate it. Automate 11's Patch Manager has this facility built in. Once you have that information, you can take action on bringing the Windows Update Agent up-to-date for the various operating systems in question.

The WUA version of any Windows system after XP/2003 can be found by checking the Product Version of the wuaueng.dll file in System32. This can be accomplished with a one-line PowerShell command:

(Get-Item 'C:\Windows\system32\wuaueng.dll').VersionInfo.ProductVersion

This gets you a collection of WUAs that correlate to an operating system. Now what? There are a million ways to maintain your WUA versions. I have it on good authority that Cubert's Patch Remedy plugin is pretty sweet. That said, here's how I do it: since the actual object of the game is to get your targets to the point where they're reliably checking for and delivering updates, I've found that it's not necessarily important to get any given endpoint up to the very latest WUA to get them to patch. What I do is figure out the minimum WUA version needed for each OS, get my target to that point, and then let the built-in patching mechanisms take over and figure out the rest for itself.
To do this, I've set up multiple Properties (Dashboard -> Config -> Configurations -> Properties) to track the minimum WUA required for reliable update delivery, so I have a single place to update this value for any script that uses it (did you know that Properties in Automate are extensible? Holy cow!). Then, I have a WUA Validator script that runs before any patching-related action, which checks the endpoint's WUA version against the relevant hard-set minimum property and sets a variable to PASS / FAIL on whether or not a patching agent may go forward. Any script that wants to patch calls the Validator, checks the variable, and if it gets a FAIL, it doesn't perform the patching command.

I've turned off my built-in Hotfix Inventory schedule for desktops and laptops. I throw scripts at them instead. And those scripts run the WUA Validator first. This has been alarmingly successful for me.

So how the heck do you validate a damn version number? It's not like you can use a math compare, since versions are strings with multiple dots in them. If you're at 7.6.7601.19161 (horrid), and you wanna be at 7.6.7601.23453 (phew!), you have to basically do an alpha sort with numbers. Here's a neat SQL trick I'll drop for you:

SELECT * FROM (SELECT '@WUA_Minimum@' AS WUA UNION SELECT '@WUA_Current@' AS WUA) AS `WUAs` ORDER BY WUA LIMIT 1

And here it is in a script step:

What's going on here? You're basically making a tiny table with one column and two rows, sorting it, and then taking the first row. This will always give you the "lower" of the two versions we're comparing, which we can then check against our minimum version. If our WUA is in compliance, the endpoint's current WUA should be at or above the minimum, so it sorts above the minimum, our variable check succeeds, and we issue a PASS. If not, screw you, endpoint. Your FAIL means I'm sending you to the WUA Remediation Groups for a 12 Step script.
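Side note: if you'd rather do the comparison in PowerShell instead of SQL, .NET's [version] type will do the per-field numeric comparison for you. This is just an alternative sketch using the example versions from above, not what my Validator actually runs:

# [version] compares each dotted field numerically, so no alpha-sort tricks needed
$minimum = [version]'7.6.7601.23453'
$current = [version](Get-Item "$env:windir\system32\wuaueng.dll").VersionInfo.ProductVersion

if ($current -ge $minimum) { 'PASS' } else { 'FAIL' }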
Of course, all these fine updates require a restart, so you'll want to factor that into your scheduling. Make sure you re-populate your EDFs after you restart your targets, as the WUA version won't update until after the restart.

A NOTE ABOUT BANDWIDTH

With the July 2016 WUA updates for the above operating systems, Microsoft fundamentally changed the way the WUA checks for updates from Microsoft to make it a lot faster. The WUA used to make lots and lots of queries to Windows Update over and over again, encrypting and decrypting everything. This took a long time, and certain versions of the WUA (screw you forever, KB3083324) could even peg your CPU while doing it. Now, the most recent WUA versions download the entire OS update manifest from Microsoft in one big gulp and do the comparisons locally. This is faster and more efficient, but if you've triggered all your Hotfix Inventories during the day for all systems at once in that one client location that's still plugging along with its stupid bonded T1, they'll probably notice a bit of a bandwidth hit. I also try to use WOL wherever possible, run the majority of my hotfix inventories in the evenings, and run laptop inventories a second time during the day around lunchtime, because you know those suckers are going straight back into users' bags at the end of the day.

II. Master Your Update Scheduling.

The thing about the Windows Update process is that it can literally only do one thing at a time. If you're trying to deliver an update but then you suddenly shoot a Hotfix Inventory command at your agent, you'll break the process. If you're doing a Hotfix Inventory but then also do an Update Config, you'll break the inventory because, believe it or not, the Update Config command also deliberately stops and restarts the Windows Update service. Ugh. What the hell, Automate. Why would you do that?

This tells us that we have to be careful how we schedule things that are Windows Update-related. Make sure your Resend Hotfix Inventory templates are scheduled away from your Update Config templates. Make sure any update delivery is scheduled away from both of the above. Make sure the Windows Update service isn't being restarted by some other process or software (I'm looking at YOU, LogMeIn).

Have you ever looked at your patch command history to find a patch that failed with error code 0x8024001E? That's the code you get when the Windows Update service gets stopped while it's attempting to perform a task. There are other causes of that error, but the biggest one (assuming your WUA is up-to-date) is a scheduling conflict.

You've also got to look at your templates and schedules. Did you know that any inventory command run by a schedule doesn't actually show up in your Commands history? The inventories run, but they're run by the agents directly without being queued and issued from the database agent first. Raise your hand if you found that out the hard way (hi).

III. Tracking and Fixing Windows Update Problems

Stuff breaks. With Windows Update, it might be a temporary thing (interrupted download, cosmic rays, the usual), or it might be something that requires a little extra abuse. Ideally, you want an automatic solution that:
1. Fixes stuff automatically where automatic fixing is possible
2. Recognizes intermittent failures and allows endpoints to "try again"
3. Informs the admin when something's wrong that can't be automatically fixed.

Because every version of Windows since Vista still uses the Side-by-Side mechanisms (that's what WinSxS stands for, btw) for installing and linking updates, these steps conveniently apply to every supported version of Windows as of this writing. As with so many things with Automate, there are a million ways to do it. I'm a personal fan of the nightly health check. I wrote an offline script that runs through every eligible agent, looking for patterns of failures. Some of the patterns include:
1. More than three patch install failures in a row with no success
2. More than three hotfix inventory failures in a row with no success
3. Excessive patching (if patches are detected as missing and re-delivered multiple times per week, something's up)
4. Empty patching history

When my health check script finds a problematic pattern, it makes a couple of decisions. First, it checks to see if a Windows Update Repair has already been run for that endpoint by looking for an indicative alert. If that alert isn't already present, it sets an EDF indicating that a WU Repair is needed. That EDF does two things. First, it exempts that endpoint from further attempts at patching through various checks, searches, and other conditionals. Second, it adds the endpoint to a group that fires off the Windows Update Repair script on a fairly assertive schedule. After the WU Repair itself gets run, it sets an Informational Alert on that agent indicating a successful run of the repair, and then the script clears its EDF. The Alert alters the behavior of the nightly health check: if the check finds Windows Update problems and sees that a Repair has already been run, it throws a ticket, because clearly that crap isn't fixing itself.
What's in the Windows Update Repair script, you ask? Oh, the usual:
1. Stop the Windows Update service and nuke the subfolder contents of %windir%\SoftwareDistribution\
2. Stop the Cryptographic Services service and nuke %windir%\System32\catroot2\
3. Reset the winhttp proxy (can't hurt, right?)
4. Check for and remove any WSUS policies applied (various registry entries)
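If you want to see the shape of it, here's a minimal PowerShell sketch of those four steps. This is not my production repair script (that one has more error handling), and the WSUS policy key shown is just the usual suspect; check for any others your environment pushes:

# 1. Stop Windows Update and clear out the download/datastore caches
Stop-Service wuauserv -Force
Remove-Item "$env:windir\SoftwareDistribution\*" -Recurse -Force -ErrorAction SilentlyContinue

# 2. Stop Cryptographic Services and nuke catroot2
Stop-Service CryptSvc -Force
Remove-Item "$env:windir\System32\catroot2" -Recurse -Force -ErrorAction SilentlyContinue

# 3. Reset the winhttp proxy (can't hurt, right?)
netsh winhttp reset proxy

# 4. Remove WSUS policy redirection if present (the common policy key; adjust for your environment)
Remove-Item 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' -Recurse -Force -ErrorAction SilentlyContinue

# Bring the services back up
Start-Service CryptSvc
Start-Service wuauserv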
If you feel like getting really clever, you could even try fixing, or at least alerting, on specific failure modes by correlating them with the type of patching error that gets spit out in your patching history text. Or, you can just look at this handy reference document and see what applies to you: A List of Windows Update error Codes

IV. Make Sure You're Patching More Than Just The Operating System.

This is a classic from the previous article, but it still applies: You know when you go into the Control Panel and you get that bit on the screen that says "You receive updates: ...For Windows and other products from Microsoft Update"? Did you know you can set that programmatically? It's ugly, but here's a line of PowerShell that you can throw into a script to do this for any supported Microsoft OS:

$ServiceManager = New-Object -ComObject "Microsoft.Update.ServiceManager"; $ServiceManager.ClientApplicationID = "My App"; $ServiceManager.AddService2( "7971f918-a847-4430-9279-4a52d1efe18d",7,"")

Hey, don't blame me, I didn't write the OS.

V. Clear The "Pushed" Update Status ("Classic" Patch Manager only)

Another classic from the previous article! The "Pushed" status for a patch means that LabTech has tried twice to install the patch and it didn't succeed, so it's not going to try to install it anymore. That's literally what it means, straight from the developer's mouth. Are you kidding? LabTech! Don't give up! You can do it!

That's the behavior. And for my environment, it doesn't work. It's not effective, it gets in the way, and it lowers my patch efficacy numbers. You can't actually "get rid of" this functionality, but you can certainly reset the Pushed flags. I do it directly with a SQL query that runs every night from a stored procedure. Here's the SQL:

UPDATE hotfix SET pushed='0' WHERE installed=0 AND pushed=1 AND approved=1

Before you do this, if you want to see how many patches you're missing due to the Pushed flag, here's the SQL to get that count:

SELECT COUNT(hotfix.`HotFixID`) FROM hotfix WHERE installed=0 AND pushed=1 AND approved=1

VI. Patch Early, Patch Often.

Once you have your Windows Update agents tamed and responsive, there's usually no reason not to deliver updates during the day. You can leverage LabTech's location caches to mitigate bandwidth concerns, and most updates can be delivered without user interruption. I have EDFs carved into Client and Location tabs that make exceptions for our daytime patching scripts for clients paranoid about either functionality or bandwidth. But for the most part, I can run a Hotfix Inventory and/or a patching script against most of my clients several times a day and nobody ever says "boo". And you might as well do it, because you know that at 5pm or even earlier, that laptop is going back in the bag, that kiosk machine with the Wi-Fi card isn't going to respond to WOL, and you won't be able to update a darn thing.

One exception to this rule is the Malicious Software Removal Tool which, if you install this sucker during the day, will immediately run its scan. Depending on available resources, this might produce unwanted (read: "slow and sketchy") behavior on an endpoint. YMMV, the usual disclaimers apply, don't say BGags didn't warn you.

VII. Don't Rest On Your Laurels.

Okay, you did all these things, you've got your patching percentage looking pretty good, and you've got scripts that work and a schedule that seems to work flawlessly. Good, congratulations, you should be proud of the work you did. Please get used to the idea that this is all going to unravel in six months. Microsoft is known for pulling the rug out from under your updates and changing their behavior with little warning. Remember October 2016, when most updates went cumulative and you had to check for a veritable nest of KB articles installed on any given endpoint to see if it was protected against WannaCry? Did you actually want to cry? Patching will never be a "set it and forget it" game, which is why Plugins are good: You don't have to do the work.

Ok, that's all I got.
  48. 2 points
The LTTray process is started by the LTSvc.exe process (which runs as a service). LTTray is responsible for reporting back the logged-in users that Labtech can interact with. Sometimes there is a port conflict or other issue that hangs LTTray. Users may complain of an unnamed white box showing up as a running program on their taskbar. You may notice the agent doesn't report that anyone has logged in, or that the agent is not applying a selfupdate successfully.

I have created a monitor that looks for computers with lttray and dwm processes running (strong indicators that a user is logged in) but that are reporting a user status of 'Not logged in'. I also have an Autofix script that will stop the ltservice and lttray processes, test for a process blocking the LTTrayPort, and move the port to a new number if needed. It will then restart the services and verify that the correct user status is now being reported. Monitor Labtech Agent Service Restart.zip
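The port-conflict test at the heart of that Autofix looks roughly like this as a PowerShell sketch. The "TrayPort" value under the agent's Service key is where my agents keep the port (verify the location on yours before trusting this), and the netstat match is deliberately quick and dirty:

# Read the port LTTray is configured to use (assumed registry location - check your agents)
$svcKey   = 'HKLM:\SOFTWARE\LabTech\Service'
$trayPort = (Get-ItemProperty -Path $svcKey).TrayPort

# Is anything already bound to that port? (crude match against netstat output)
$conflict = netstat -ano | Select-String "[:.]$trayPort\s"

if ($conflict) {
    # Bump the port, kill the hung tray, and bounce the service so it re-reads the setting
    Set-ItemProperty -Path $svcKey -Name TrayPort -Value ($trayPort + 1)
    Stop-Process -Name LTTray -Force -ErrorAction SilentlyContinue
    Restart-Service LTService -Force
}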
  49. 2 points
From my experience: If the script is flagged as online (not an offline script), you would definitely use something like the example loop that waits for the agent to check in after a reboot before continuing. *BUT* I believe the script engine will kill the script after 15 minutes without agent contact, even if you are safely in a server-side loop. LabTech officially considers the agent offline, therefore an online script cannot run. This timeout may or may not be hardcoded; I do not know.

If the script is an offline script, I think you can run indefinitely without the Script Engine killing the script (for reasonable levels of indefinitely) *IF* you are running server-side commands (like the example loop that looks for the agent to check in again, with 30-60 second sleeps inside it). If you used a script function that requires a remote command, the script will block while/until the command executes, but the script engine might decide that the agent command timed out and fail the script step, and if Continue on Failure isn't set for the step then your script will die. The loop is the safe way to avoid this problem.

I have handled this a couple of different ways, depending on whether I simply need a reboot function in a script or I am running something that I expect to need to reliably run across many reboots:

1. I have an agent reboot script that I can call as a function script. The script issues the reboot, then loops until the agent checks back in (something like the example loop shown), then waits for another 60 seconds for good measure and exits. Control returns to the original script exactly where it called the reboot script, but with the agent rebooted, responding, and ready to go again.

2. I have setup/staging scripts that are specifically designed to handle reboots by virtue of "checkpoints", if you will. There is a variable I define named "ScriptStage". I have labels for each stage, and as the script enters each stage it defines the stage variable. As the script progresses it steps through one stage and into the next, increasing the ScriptStage as it goes. Within each stage, if a reboot is needed, I jump to a ":PerformReboot" label. This section calls the SAME script for 5 or 10 minutes from now (queues the script), issues a reboot to the agent, and then exits. Now the queued script inherits all variables (state) from the old script instance, and the script engine waits until the agent checks back in (it's an online script, so it can stay queued for hours, days maybe even). No loops needed; the script engine won't run it until it sees a check-in. When the agent comes back and the queued script does start up, it checks if the "ScriptStage" variable is already defined and then calls a "MatchGoto" function like:

MatchGoto: (0,:BeginStage0),(1,:BeginStage1),(2,:BeginStage2),(3,:BeginStage3),(4,:BeginStage4),(5,:BeginStage5),(6,:BeginStage6),(7,:BeginStage7),(8,:BeginStage8),(9,:BeginStage9),(10,:BeginStage10),(:UndefinedStage)

This jumps the script right back to the beginning of the stage it was in when the reboot was issued. It doesn't resume at the exact same script line (it starts the stage over), but that is acceptable for my purposes.
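For anyone who wants the same checkpoint idea outside of the Automate script engine, here's a bare-bones PowerShell sketch. The stage file path is made up, and you'd need something like a scheduled task or RunOnce entry to re-launch the script at boot, since you don't get Automate's queued-script-with-inherited-variables behavior for free:

# Persist a stage number and jump straight back to it on the next run
$stageFile = 'C:\ProgramData\Staging\ScriptStage.txt'   # hypothetical state location
New-Item -ItemType Directory -Path (Split-Path $stageFile) -Force | Out-Null

$stage = 0
if (Test-Path $stageFile) { $stage = [int](Get-Content $stageFile -TotalCount 1) }

switch ($stage) {
    0 {
        # ... stage 0 work here ...
        Set-Content -Path $stageFile -Value 1   # checkpoint before rebooting
        Restart-Computer -Force
    }
    1 {
        # ... stage 1 work, picked up after the reboot ...
        Remove-Item $stageFile -Force           # done, clear the checkpoint
    }
}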
  50. 2 points
I use this batch file as a startup script or as a scheduled task assigned by a GPO to deploy/maintain LabTech Agent installations. I normally place it under the \\DOMAIN\NetLogon share where it can be run manually on a workstation as well. The only customization needed is the value defined for "LTServerHostname". This just needs to be a partial match of the LT Server URL so that the script can tell if the LabTech installation belongs to you or someone else.

The beginning of the script tests if LTService is started and exits if it is running. It checks the LT Server name, and if it is not yours it moves right to the removal steps. If it is yours, it checks that the LTSvcMon service is running and, if so, it exits. If neither is running, it checks to make sure that the service exists and tries to start it. If it starts successfully, the script exits. This allows the script to be run as often as desired, and it will only proceed to install if the current agent is missing or broken.

If the service won't start, doesn't exist, or is associated with the wrong LT Server, it enters the next stage, "PrepareForInstall". Here it verifies that the .Net requirements have been met (I haven't added in any PowerShell requirements) and uses Ninite or DISM to install .Net 3.5 if needed. It also detects Terminal Servers and enters TS Install Mode.

The next stage is "StartUnInstall", where it kills processes, stops and deletes the services, and thoroughly rips out the old installation by removing registry keys and renaming the LTSvc folder. This is the section you would be interested in for a LabTech Agent Uninstall script. The last stage is the "BeginLTInstall" stage, where the agent installer is launched.

@echo off
REM NOTE - Script operation relies on ninitepro.exe and ltsilent.exe being in the same folder as this batch file. (Should probably test for this, right?)
SETLOCAL
SET "LTServerHostname=MYMSPNAME.hostedrmm.com"
SET "TSMode=QUERY"
REM DATE/TIME format expected: "Day MM/DD/YYYY HH:MM:SS.ss" - Other formats will break the value of "DATETIME"
SET "DATETIME=%date:~10,4%%date:~7,2%%date:~4,2%-%time:~0,2%%time:~3,2%%time:~6,2%"
REM Logged output is dumped out at the end of the script. LOGGINGPATH must be defined.
REM If you want each command's output to be dumped to CON as the script executes, set LOGGINGPATH=CON
SET LOGGINGPATH="%temp%\ltagentdeploy-%DATETIME: =%.txt"
REM To skip the safety checks for an existing installation, uncomment the next line.
REM GOTO PrepareForInstall
REM To skip the prerequisite checks, uncomment the next line.
REM GOTO StartUnInstall
REM Checking for running Labtech Service, Exit if found.
sc query LTService 2>NUL | findstr /i /c:"STATE" | findstr /i /c:"RUNNING" > NUL && EXIT /B
REM Checking for correct server registration, advance to install if valid server address not found
reg query "HKLM\Software\LabTech\Service\Settings" 2>NUL | findstr /i /c:"%LTServerHostname%" > NUL || GOTO PrepareForInstall
REM Checking for running Labtech Service Monitor, Exit if found.
sc query LTSvcMon 2>NUL | findstr /i /c:"STATE" | findstr /i /c:"RUNNING" > NUL && EXIT /B
REM Checking for Labtech Service, advance to install if not found
sc query LTService 2>NUL | findstr /i /c:"SERVICE_NAME" > NUL || GOTO PrepareForInstall
REM Try to start the services, see if they are failing and need to be reinstalled
net start LTService > NUL 2>&1
net start LTSvcMon > NUL 2>&1
REM Pause for 5 seconds
ping -n 5 127.0.0.1 > NUL 2>&1
REM Checking for running Labtech Service, Exit if found.
sc query LTService 2>NUL | findstr /i /c:"STATE" | findstr /i /c:"RUNNING" > NUL && EXIT /B
REM Pause for 5 seconds
ping -n 5 127.0.0.1 > NUL 2>&1
REM Checking for running Labtech Service, Exit if found.
sc query LTService 2>NUL | findstr /i /c:"STATE" | findstr /i /c:"RUNNING" > NUL && EXIT /B
REM Something is wrong, let's reinstall the agent. This is the point of no return unless the script encounters an error.
:PrepareForInstall
REM Fixup the log path in case it was set with quotes.
SET LOGGINGPATH=%LOGGINGPATH:"=%
REM Determine Windows Product and Version
SET "WVER="
FOR /F "usebackq tokens=2 delims==" %%A IN (`type "C:\Windows\system32\prodspec.ini" 2^>NUL ^| find /i "Product=" `) DO @SET "WVER=%%~A"
IF NOT DEFINED WVER FOR /F "usebackq tokens=2*" %%A IN (`reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v ProductName 2^>NUL ^| find /i "ProductName"`) DO @SET "WVER=%%~B"
FOR /F "usebackq tokens=2 delims=[]" %%A IN (`ver`) DO SET "WVER=%WVER% %%~A"
SET "WVER=echo %WVER%"
REM Unknown Windows
%WVER% | FINDSTR /R /C:"Version [5-9]\." /C:"Version 10\." > NUL || ( Call ::TheEnd ERROR - Unrecognized Windows Version & exit /b 1 )
pushd "%WINDIR%\Temp" >> "%LOGGINGPATH%" 2>&1
ECHO %DATE% %TIME% - Check for Server OS: >> "%LOGGINGPATH%"
%WVER% | FINDSTR /R /C:" Server " > NUL && (
  ECHO %DATE% %TIME% - Found. Check for Terminal Server >> "%LOGGINGPATH%"
  CHANGE USER /QUERY 2>NUL | FINDSTR /I /C:"Remote Administration" > NUL || (
    ECHO %DATE% %TIME% - Terminal Server found. Check for Terminal Server Install Mode >> "%LOGGINGPATH%"
    SET "TSMode=EXECUTE"
    CHANGE USER /QUERY 2>NUL | FINDSTR /I /C:"Execute" > NUL || SET "TSMode=INSTALL"
  )
)
IF "%TSMode%"=="EXECUTE" CHANGE USER /INSTALL >> "%LOGGINGPATH%" 2>&1
ECHO %DATE% %TIME% - Detecting Windows Version and Checking Prequisites >> "%LOGGINGPATH%"
%WVER% detected >> "%LOGGINGPATH%"
REM Windows XP/2003
%WVER% | FINDSTR /R /C:"Version 5\.[12]" > NUL && (
  ECHO %DATE% %TIME% - Windows XP/2003 - Launch Ninite to deploy .Net if not installed: >> "%LOGGINGPATH%"
  reg query "HKLM\Software\Microsoft\NET Framework Setup\NDP" | findstr /I /C:"\v3.5" /C:"\v4." > NUL || ( copy /Y "%~dp0\ninitepro.exe" "%WINDIR%\Temp" && cmd /c "CD /D "%WINDIR%\Temp" & "%WINDIR%\TEMP\Ninitepro.exe" /select ".NET" /silent "%LOGGINGPATH%-ninite" " ) >> "%LOGGINGPATH%" 2>&1
  type "%LOGGINGPATH%-ninite" >> "%LOGGINGPATH%" & DEL /Q /F "%LOGGINGPATH%-ninite" >NUL 2>&1
  ECHO %DATE% %TIME% - Confirming .NET is installed >> "%LOGGINGPATH%"
  reg query "HKLM\Software\Microsoft\NET Framework Setup\NDP" | findstr /I /C:"\v3.5" /C:"\v4." > NUL || ( Call ::TheEnd ERROR - .NET Components could not be installed & exit /b 1 )
  %WVER% | FINDSTR /R /C:" Server " > NUL && (
    ECHO %DATE% %TIME% - See http://support.microsoft.com/kb/3072630 if installation fails due to crypt32 errors (Failure to establish SSL connection to LabTech server) >> "%LOGGINGPATH%"
    REM KB3072630 Downloads - https://technet.microsoft.com/library/security/ms15-074
    REM KB3072630 supercedes previous solutions KB938397 and KB968730.
    REM ECHO %DATE% %TIME% - See Microsoft KB938397 for a hotfix that may be needed to complete agent installation. (Resolves crypt32 error establishing SSL connections) >> "%LOGGINGPATH%"
    REM ECHO %DATE% %TIME% - KB - http://support.microsoft.com/kb/938397 >> "%LOGGINGPATH%"
    REM ECHO %DATE% %TIME% - Windows 2003 32Bit Download Location: (http://hotfixv4.microsoft.com/Windows%%20Server%%202003/sp3/Fix200653/3790/free/315139_ENU_i386_zip.exe) >> "%LOGGINGPATH%"
    REM ECHO %DATE% %TIME% - Windows 2003 64bit Download Location: (http://hotfixv4.microsoft.com/Windows%%20Server%%202003/sp3/Fix200653/3790/free/315159_ENU_x64_zip.exe) >> "%LOGGINGPATH%"
  )
)
%WVER% | FINDSTR /R /C:"Version 5\.[12]" > NUL && GOTO StartUnInstall
REM Windows Vista/7/2008
%WVER% | FINDSTR /R /C:"Version 6\.1" > NUL && (
  ECHO %DATE% %TIME% - Windows Vista/7/2008 - Launch Ninite to deploy .Net if not installed: >> "%LOGGINGPATH%"
  reg query "HKLM\Software\Microsoft\NET Framework Setup\NDP" | findstr /I /C:"\v3.5" /C:"\v4." > NUL || ( copy /Y "%~dp0\ninitepro.exe" "%WINDIR%\Temp" && cmd /c "CD /D "%WINDIR%\Temp" & "%WINDIR%\TEMP\Ninitepro.exe" /select ".NET" /silent "%LOGGINGPATH%-ninite" " ) >> "%LOGGINGPATH%" 2>&1
  type "%LOGGINGPATH%-ninite" >> "%LOGGINGPATH%" & DEL /Q /F "%LOGGINGPATH%-ninite" >NUL 2>&1
  ECHO %DATE% %TIME% - Confirming .NET is installed >> "%LOGGINGPATH%"
  reg query "HKLM\Software\Microsoft\NET Framework Setup\NDP" | findstr /I /C:"\v3.5" /C:"\v4." > NUL || ( Call ::TheEnd ERROR - .NET Components could not be installed & exit /b 1 )
)
%WVER% | FINDSTR /R /C:"Version 6\.1" > NUL && GOTO StartUnInstall
REM Windows 8 or 2012
%WVER% | FINDSTR /R /C:"Version 6\.[2-9]" /C:"Version [7-9]\.[0-9]" > NUL && (
  ECHO %DATE% %TIME% - Windows 8 or 2012 - Activate built-in .NET if not installed: >> "%LOGGINGPATH%"
  reg query "HKLM\Software\Microsoft\NET Framework Setup\NDP" | findstr /I /C:"\v3.5" > NUL || DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /NoRestart >> "%LOGGINGPATH%" 2>&1
  ECHO %DATE% %TIME% - Confirming .NET is installed >> "%LOGGINGPATH%"
  reg query "HKLM\Software\Microsoft\NET Framework Setup\NDP" | findstr /I /C:"\v3.5" > NUL || ( Call ::TheEnd ERROR - .NET Components could not be installed & exit /b 1 )
)
%WVER% | FINDSTR /R /C:"Version 6\.[2-9]" /C:"Version [7-9]\.[0-9]" > NUL && GOTO StartUnInstall
REM Windows 10 or 2016
%WVER% | FINDSTR /R /C:"Version 10\." > NUL && (
  ECHO %DATE% %TIME% - Windows 10 or 2016 - Activate built-in .NET if not installed: >> "%LOGGINGPATH%"
  reg query "HKLM\Software\Microsoft\NET Framework Setup\NDP" | findstr /I /C:"\v3.5" > NUL || DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /NoRestart >> "%LOGGINGPATH%" 2>&1
  ECHO %DATE% %TIME% - Confirming .NET is installed >> "%LOGGINGPATH%"
  reg query "HKLM\Software\Microsoft\NET Framework Setup\NDP" | findstr /I /C:"\v3.5" > NUL || ( Call ::TheEnd ERROR - .NET Components could not be installed & exit /b 1 )
)
%WVER% | FINDSTR /R /C:"Version 10\." > NUL && GOTO StartUnInstall
Call ::TheEnd ERROR - Unrecognized or Unsupported Windows Version
exit /b 1
REM This is the point of no return for uninstalling/reinstalling. The script will terminate services and rename the LTSvc folder, so the existing installation is definitely going down.
:StartUnInstall
ECHO %DATE% %TIME% - Stop LTSvcMon if it is running >> "%LOGGINGPATH%"
sc query LTSvcMon 2>NUL | findstr /i /c:"STATE" | findstr /i /c:"STOPPED" > NUL || (
  SC STOP LTSvcMon >> "%LOGGINGPATH%" 2>&1
  TASKKILL /im ltsvcmon.exe /f >> "%LOGGINGPATH%" 2>&1
)
ECHO %DATE% %TIME% - Stop LTService if it is running >> "%LOGGINGPATH%"
sc query LTService 2>NUL | findstr /i /c:"STATE" | findstr /i /c:"STOPPED" > NUL || (
  SC STOP LTService >> "%LOGGINGPATH%" 2>&1
  TASKKILL /im ltsvc.exe /f >> "%LOGGINGPATH%" 2>&1
)
ECHO %DATE% %TIME% - Stop Process LTTray if it is running >> "%LOGGINGPATH%"
TASKKILL /im lttray.exe /f >> "%LOGGINGPATH%" 2>&1
ECHO %DATE% %TIME% - Stop LabVNC if it is running >> "%LOGGINGPATH%"
sc query LabVNC 2>NUL | findstr /i /c:"STATE" | findstr /i /c:"STOPPED" > NUL || (
  SC STOP LabVNC >> "%LOGGINGPATH%" 2>&1
  TASKKILL /im labvnc.exe /f >> "%LOGGINGPATH%" 2>&1
)
ECHO %DATE% %TIME% - Stop LabTech Update process if it is running >> "%LOGGINGPATH%"
TASKKILL /im labtechupdate.exe /f /t >> "%LOGGINGPATH%" 2>&1
ECHO %DATE% %TIME% - Rename Old LTSVC Folder if it exists >> "%LOGGINGPATH%"
IF EXIST "%WINDIR%\LTSvc\." IF EXIST "%WINDIR%\LTSvc.Old\." RMDIR "%WINDIR%\LTSvc.Old" /Q /S >> "%LOGGINGPATH%" 2>&1
IF EXIST "%WINDIR%\LTSvc\." MOVE /Y "%WINDIR%\LTSvc" "%WINDIR%\LTSvc.Old" >> "%LOGGINGPATH%" 2>&1
ECHO %DATE% %TIME% - Cleanup LTUpdate folder if found. >> "%LOGGINGPATH%"
IF EXIST "%TEMP%\_LTUpdate" RMDIR "%TEMP%\_LTUpdate" /Q /S >> "%LOGGINGPATH%" 2>&1
IF EXIST "%WINDIR%\TEMP\_LTUpdate" RMDIR /Q /S "%WINDIR%\TEMP\_LTUpdate" >> "%LOGGINGPATH%" 2>&1
ECHO %DATE% %TIME% - Remove LTSvcMon Service if it exists >> "%LOGGINGPATH%"
sc query LTSvcMon 2>NUL | findstr /i /c:"Service_name" > NUL && SC DELETE LTSvcMon >> "%LOGGINGPATH%" 2>&1
ECHO %DATE% %TIME% - Remove LTService Service if it exists >> "%LOGGINGPATH%"
sc query LTService 2>NUL | findstr /i /c:"Service_name" > NUL && SC DELETE LTService >> "%LOGGINGPATH%" 2>&1
ECHO %DATE% %TIME% - Remove LabVNC Service if it exists >> "%LOGGINGPATH%"
sc query LabVNC 2>NUL | findstr /i /c:"Service_name" > NUL && SC DELETE LabVNC >> "%LOGGINGPATH%" 2>&1
REM Search and Destroy existing installations
:UninstallCleanup
FOR /F "usebackq tokens=*" %%A IN (`reg query "HKLM\Software\Microsoft\windows\currentversion\uninstall" /k /f "*" 2^>NUL ^| FINDSTR /r /c:"\{........-....-....-....-............}$"`) DO REG QUERY "%%A" /v DisplayName 2>NUL | FINDSTR /i /r /c:"Labtech.* Agent" > NUL && CALL ::UninstallGUID "%%~A" >> "%LOGGINGPATH%" 2>&1
FOR /F "usebackq tokens=*" %%A IN (`reg query "HKLM\Software\WOW6432Node\Microsoft\windows\currentversion\uninstall" /k /f "*" 2^>NUL ^| FINDSTR /r /c:"\{........-....-....-....-............}$"`) DO REG QUERY "%%A" /v DisplayName 2>NUL | FINDSTR /i /r /c:"Labtech.* Agent" > NUL && CALL ::UninstallGUID "%%~A" >> "%LOGGINGPATH%" 2>&1
ECHO %DATE% %TIME% - Cleanup other Registry Keys if found >> "%LOGGINGPATH%"
REG QUERY "HKLM\Software\LabTech" /ve 2>NUL | FINDSTR /i /c:"LabTech" > NUL && ECHO %DATE% %TIME% - Deleting Key "HKLM\Software\LabTech" >> "%LOGGINGPATH%" && REG DELETE "HKLM\Software\LabTech" /F >> "%LOGGINGPATH%" 2>&1
REG QUERY "HKLM\Software\WOW6432Node\LabTech" /ve 2>NUL | FINDSTR /i /c:"LabTech" > NUL && ECHO %DATE% %TIME% - Deleting Key "HKLM\Software\WOW6432Node\LabTech" >> "%LOGGINGPATH%" && REG DELETE "HKLM\Software\WOW6432Node\LabTech" /F >> "%LOGGINGPATH%" 2>&1
GOTO UninstallCleanupEnd
:UninstallGUID
SETLOCAL
SET "_K=%~nx1"
IF "%_K%"=="" exit /b 0
SET "_RK=%_K:~8,1%%_K:~7,1%%_K:~6,1%%_K:~5,1%%_K:~4,1%%_K:~3,1%%_K:~2,1%%_K:~1,1%%_K:~13,1%%_K:~12,1%%_K:~11,1%%_K:~10,1%%_K:~18,1%%_K:~17,1%%_K:~16,1%%_K:~15,1%%_K:~21,1%%_K:~20,1%%_K:~23,1%%_K:~22,1%%_K:~26,1%%_K:~25,1%%_K:~28,1%%_K:~27,1%%_K:~30,1%%_K:~29,1%%_K:~32,1%%_K:~31,1%%_K:~34,1%%_K:~33,1%%_K:~36,1%%_K:~35,1%"
FOR /F "usebackq tokens=2,*" %%A IN (`REG QUERY "%~1" /v DisplayName 2^>NUL ^| FIND /i "DisplayName"`) DO ECHO %DATE% %TIME% - Uninstalling Product "%%~B", GUID %_K%
start /b /wait MSIEXEC /X %_K% /QN /NORESTART
ECHO %DATE% %TIME% - Uninstall Command Completed
ECHO %DATE% %TIME% - Cleaning registry keys and folders related to %_K%
REM echo Delete Key HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Classes\Installer\Products\%_RK%
REG DELETE "HKLM\SOFTWARE\WOW6432Node\Classes\Installer\Products\%_RK%" /f 2>NUL && ECHO %DATE% %TIME% - Removed Registry Key "HKLM\SOFTWARE\WOW6432Node\Classes\Installer\Products\%_RK%"
REM echo Delete Key HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Installer\Products\%_RK%
REG DELETE "HKLM\SOFTWARE\Classes\Installer\Products\%_RK%" /f 2>NUL && ECHO %DATE% %TIME% - Removed Registry Key "HKLM\SOFTWARE\Classes\Installer\Products\%_RK%"
REM echo Delete Folder C:\Windows\Installer\%_K%
IF EXIST "C:\Windows\Installer\%_K%\." rmdir /q /s "C:\Windows\Installer\%_K%" 2>NUL && ECHO %DATE% %TIME% - Removed Directory "C:\Windows\Installer\%_K%"
IF EXIST "C:\ProgramData\Package Cache\%_K%\." rmdir /q /s "C:\ProgramData\Package Cache\%_K%" 2>NUL && ECHO %DATE% %TIME% - Removed Directory "C:\ProgramData\Package Cache\%_K%"
REM echo Delete key %~1
REG DELETE "%~1" /f 2>NUL && ECHO %DATE% %TIME% - Removed Registry Key "%~1"
ENDLOCAL
:UninstallGUIDEnd
exit /b 0
:UninstallCleanupEnd
ECHO %DATE% %TIME% - System prerequisites confirmed.>> "%LOGGINGPATH%"
:BeginLTInstall
ECHO %DATE% %TIME% - Copy and launch LTSilent.exe. >> "%LOGGINGPATH%"
COPY /Y "%~dp0\LTSilent.exe" "%WINDIR%\Temp" >> "%LOGGINGPATH%" 2>&1
start /b /wait "LT" cmd /c ""%WINDIR%\Temp\LTSilent.exe" /Q /S >> "%LOGGINGPATH%" 2>&1"
ECHO %DATE% %TIME% - LTSilent.exe process has completed. >> "%LOGGINGPATH%"
timeout /t 5 /nobreak > NUL
sc query LTService 2>NUL | findstr /i /c:"Service_name" > NUL || ( Call ::TheEnd ERROR - LTService was not successfully installed & exit /b 1 )
ECHO %DATE% %TIME% - Success! LTService is installed. >> "%LOGGINGPATH%"
popd
:TheEnd
IF NOT "[%~1]"=="[]" ECHO %DATE% %TIME% - %* >> "%LOGGINGPATH%"
%WVER% | FINDSTR /R /C:" Server " > NUL && IF NOT "%TSMode%"=="QUERY" CHANGE USER /%TSMode% >> "%LOGGINGPATH%" 2>&1
IF NOT "%LOGGINGPATH%"=="CON" ( TYPE "%LOGGINGPATH%" 2>NUL )
ENDLOCAL
exit /b

This supports and has been tested on XP-Windows 10/Server 2003-2012R2. Server 2016 should be supported, but has not been tested.