Showing results for tags 'snmp'.



Found 6 results

  1. Hey everyone, I'm looking for a bit of help. I am trying to monitor HP MSA SAN devices through LT. I have read all of the threads I could find related to this topic. I have the MSA MIBs installed on the LT server and I have built custom monitors using the OIDs that worked for HP servers, but they aren't alerting. I set the Community in the LT Network Device pop-out to ensure it was looking with the right community string, and I set the credentials to use a monitor login that is set in the HP's User Account section. I did an SNMP walk against the device and found that it doesn't have the OIDs to match the server. I have added a custom community string and walked against that to verify that I am getting info back, just not the info that I need. Has anyone successfully built monitors for HP storage devices with LT? If you didn't build custom monitors, how are you monitoring yours? I know there is a built-in email option, but we are trying to do this through SNMP instead. Specifically, I am trying to monitor HP MSA 2050 devices, but I know that working monitors for previous generations should work. Any help anyone can offer is greatly appreciated. Brett
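One way to narrow this down is to capture a full walk once and check which subtrees the array actually exposes before building monitors. A rough sketch in Python of filtering walk output by OID prefix (the sample lines and the .1.3.6.1.4.1.11... prefix are made up for illustration; real MSA OIDs come from the MSA MIBs):

```python
# Filter snmpwalk output for OIDs under a given subtree, to confirm
# whether the OIDs a monitor expects actually exist on the device.

def oids_under(walk_lines, prefix):
    """Return the walk lines whose OID falls under the given prefix."""
    matches = []
    for line in walk_lines:
        oid = line.split("=", 1)[0].strip()
        # Exact match, or the prefix followed by a deeper sub-identifier.
        if oid == prefix or oid.startswith(prefix + "."):
            matches.append(line.strip())
    return matches

# Example with made-up walk output (real output would come from e.g.
# `snmpwalk -v2c -c <community> <msa-ip> .1.3.6.1`):
walk = [
    ".1.3.6.1.2.1.1.1.0 = STRING: HP MSA 2050 SAN",
    ".1.3.6.1.2.1.1.2.0 = OID: .1.3.6.1.4.1.11.2.51",
    ".1.3.6.1.4.1.11.2.51.1.0 = INTEGER: 2",
]
print(oids_under(walk, ".1.3.6.1.4.1.11.2.51"))
```

If the subtree a monitor polls comes back empty here, the monitor can never alert, which matches the symptom described above.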
  2. In case it helps anyone, I found a quick-and-dirty way to monitor Dell PowerVaults for hardware or RAID problems without mucking around with SNMP. Dell's Modular Disk Storage Manager (MDSM) software includes a command-line tool (SMcli.exe) that can output a list of the storage arrays it knows about and a brief health-status descriptor for each:

     "C:\Program Files (x86)\Dell\MD Storage Software\MD Storage Manager\client\smcli.exe" -d -v

     The output looks something like this:

     MAIN-SAN-1 [ip address] [ip address] Optimal
     BACKUP-SAN-1 [ip address] [ip address] Needs Attention
     SMcli completed successfully.

     So I created a remote EXE monitor on the server running MDSM that runs the above command and checks if the output contains the string "needs attention".
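The string check above only needs a few lines; here is a sketch in Python of parsing that SMcli output and flagging anything that isn't Optimal (the IP addresses are placeholders, and the column layout is assumed from the sample output above):

```python
# Parse `smcli -d -v` output and flag any array whose status column is
# not "Optimal". The expected format, per the sample output, is:
#   <array-name> <mgmt-ip-1> <mgmt-ip-2> <status text...>

def unhealthy_arrays(smcli_output):
    """Return {array_name: status} for every non-Optimal array."""
    bad = {}
    for line in smcli_output.splitlines():
        line = line.strip()
        # Skip blank lines and the trailing "SMcli completed successfully."
        if not line or line.startswith("SMcli"):
            continue
        parts = line.split()
        name = parts[0]
        # Status is everything after the name and the two IP columns.
        status = " ".join(parts[3:])
        if status and status != "Optimal":
            bad[name] = status
    return bad

sample = """\
MAIN-SAN-1 10.0.0.10 10.0.0.11 Optimal
BACKUP-SAN-1 10.0.0.20 10.0.0.21 Needs Attention
SMcli completed successfully."""
print(unhealthy_arrays(sample))  # {'BACKUP-SAN-1': 'Needs Attention'}
```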
  3. Standard Disclaimer Applies. This is an attempt to document how I am monitoring SNMP for ProLiant servers through LabTech. It is a merger of the previous two threads on the subject, along with my notes from setting it up and muddling through SNMP after years of not looking at it at all. Forgive anything I put in wrong, and please comment so I can change the content and wording to be as exact as possible. NOTE: This is not my whole solution, but it should be enough for anyone to build out their own. Once I figured it out I created all the monitors I wanted ;). Process:
     • Make sure the SNMP feature is installed.
     • Configure the SNMP service. Make sure all the boxes are checked and set the contact and location information -- THIS STEP MUST BE COMPLETED MANUALLY ON EACH MACHINE.
     • Set up your community strings (one has to be read/write for Insight -- mine was auto-created -- and one read-only; for testing purposes I just used public, admin, proliant as my test values, change them for production). Set it up to accept from the local host.
     • Install the Insight Management Agents and WBEM providers from HP and their prerequisites:
       • Go to the HP support site: https://support.hpe.com/hpesc/public/home
       • Type a recent server into the box (DL380 Gen9 or so), click drivers & downloads, and select your OS. Grab:
         • iLO 3/4 Channel driver for Server 2016 (prereq)
         • iLO 3/4 Management Controller Driver Package for Windows Server 2016 (prereq)
         • HP Insight Management Agents x64
         • HPE Insight Management WBEM Providers for Windows Server x64
       • My preferred method for this is to set up the standard build for the server (OS- and feature-wise) with SNMP enabled and configured, and then run the SPP on the machine and upgrade everything, etc.
     • Create a group to add your remote monitors to. Add a known ProLiant server to the group for testing, preferably one you have quick physical access to.
     • Start the SNMP service, and all the Insight Management Agents listed as dependencies of the SNMP service.
     • Add the MIB files from the HP SIM MIB kit. I found this website to be more helpful than manually trying to read through those MIB files (the link is to the MIB tree that contains the drive array status). Find 'condition'. For now I am only using scalar Integers with fast and easy go/no-go criteria. Update: my Manual Run script pulls the end OIDs from a table and creates a monitor for each of them -- food for thought. The only ones I actually had to add for this were CPQHOST-MIB, CPQIDA-MIB, and CPQHLTH-MIB, if I remember correctly. But why not have those OIDs for when your probe walks, and to use for detection templates! See this thread on how to fix up those MIBs so they don't get pesky on import (LabTech's guidance on loading up those pesky MIB files).
     • Add remote monitors against your group(s) for what you'd like to monitor. I added one for the SNMP service against my mid-level SNMP/Windows group, in addition to the following on the HP ProLiant sub-group. I used the continuous alert style and made the subject something like "%name% %state% on %computername%" or "Proliant SNMP Monitoring Dependency failed" for success and failure. I then put what the return conditions mean into the failure/success messages to aid in troubleshooting.
       • Remote monitors for these services at a minimum: CpMgHost, CpqNicMgmt, CqMgServ, CqMgStor (naming convention: SVC - %servicename% (SNMP))
       • Remote monitors for each OID you want to monitor (naming convention: SNMP - HP Proliant - Drive Array)
     • Test your monitors (pull a drive to degrade the array! Unplug a power supply! Hit a drive with a hammer before installing it! Perform other testing actions!)

     Once you have verified this is working on your test server, import the XML scripts in the following order:
     • EDFs, in any order.
     • Scripts. I set this up so you have 3 groups (SNMP > Windows > HP). On the Windows group you schedule the scripts that install SNMP and configure it daily, and autojoin using a search for the EDF 'Use SNMP' and server OS, Windows (not in my included scripts). Then on the HP group the magic happens: there is a master script that first checks to make sure SNMP is marked as installed, then enabled, then installs all the software in order using the latest download links I could find. If it is already configured, the script exits with no ticket. If Windows can't get SNMP, you get the 'general failure' ticket. If any software fails, you get two tickets: one for the offending software, and the general one. So with only 3 scripts scheduled, it installs SNMP, configures it, and then installs all the prerequisites for SNMP monitoring on these devices. You have to set up your own ticket comments and finishes. For the application install scripts, make sure they are all 'Isolated' and are using the latest EXEs (they get updated about every quarter or two). I used the following community strings on my test machine -- search for references to them and change them to your production key like I did: public, admin, insight, proliant. You have to go into the Manual Run-% script to set up your community key in the SQL INSERT statement. This script creates remote monitors for each physical drive as reported by SNMP.
     • Remote monitors. I will export some of the remote monitors I make. They are currently set up to monitor: Drive Arrays, Temperature System, CPU Fans, ASR, Resilient Memory, System Fans, and a few others.

     I recommend importing against a known ProLiant machine's computerid (yes, you have to modify each file; I set the computerid in the export to %computerid% on all files by opening them all in Notepad++ and doing a replace against my computerid field), then dragging them to your groups. Add the searches to the groups to add the rest of your servers.

     UPDATES:
     • Added a hyperlink to Darren's MIB import fix batch file
     • Changed wording to be more exact
     • Added what services I find it necessary to monitor
     • Added links
     • Added a verbose roadmap
     • Added some scripts, EDFs, searches, etc. that I am using

     Credits:
     • Myself
     • @DarrenWhite99: link inside on MIBs and help making the other things
     • @Joe.McCall: https://www.labtechgeek.com/topic/2827-hp-server-hardware-monitoring/
     • @HickBoy: https://www.labtechgeek.com/topic/3756-hp-smartarray-monitoring/?do=findComment&comment=22997

     Attachments: SNMP HP Proliant.zip, Remote Monitors.zip, Physical Drive Ticketing (export).xml
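For the go/no-go checks on the 'condition' scalars, the CPQ MIBs generally encode condition as a small integer enum. A sketch in Python; the 1=other, 2=ok, 3=degraded, 4=failed mapping is the common Compaq/HP convention, but verify it against your own MIB files before building monitors on it:

```python
# Go/no-go check for the integer "condition" scalars in the CPQ MIBs.
# The enum mapping below is the usual Compaq/HP convention
# (1=other, 2=ok, 3=degraded, 4=failed) -- verify against your MIB files.

CONDITION = {1: "other", 2: "ok", 3: "degraded", 4: "failed"}

def check_condition(value):
    """Map a polled condition integer to (status_text, is_ok)."""
    status = CONDITION.get(value, "unknown")
    return status, status == "ok"

print(check_condition(2))  # ('ok', True)
print(check_condition(4))  # ('failed', False)
```

This is the same fast pass/fail criterion the remote monitors above apply: anything other than 2 gets alerted on.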
  4. I am starting a new thread so as not to hijack the previous thread or take it off topic; previous correspondence was given on an older post by @Duvak. I am trying to set my templates up according to what @Duvak had stated, but I can't seem to get it to work. If I walk the device, I get OID 1.3.6.1.2.1.1.2.0 with a value of 1.3.6.1.4.1.14988.1. I tried your method of ^1.3.6.1.4.1.14988.1, but it doesn't detect. I tried OID 1.3.6.1.2.1.1.1.0, which has a string value of RouterOS RB2011UAS; I tried (?!)RouterOS.* and I have tried the full string RouterOS RB2011UAS. None of these methods seem to detect the device as a Firewall; it comes up as a generic Network Device.
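I can't speak to how LabTech evaluates detection templates internally, but the regex patterns themselves can be checked in isolation. A quick Python sketch using the values from the post: note that (?!) is a negative lookahead on the empty pattern, which always fails, so (?!)RouterOS.* can never match anything:

```python
import re

# Values reported by the poster's SNMP walk:
sys_descr = "RouterOS RB2011UAS"          # OID 1.3.6.1.2.1.1.1.0
sys_object_id = "1.3.6.1.4.1.14988.1"     # OID 1.3.6.1.2.1.1.2.0

# "(?!)" is a negative lookahead on the empty pattern; the empty pattern
# matches at every position, so the lookahead always fails and this
# regex can never match any input:
print(re.search(r"(?!)RouterOS.*", sys_descr))  # None

# Plain anchored patterns do match (escaping the dots in the OID, since
# an unescaped "." would match any character):
print(bool(re.match(r"^RouterOS", sys_descr)))                        # True
print(bool(re.match(r"^1\.3\.6\.1\.4\.1\.14988\.1", sys_object_id)))  # True
```

So the ^1.3.6.1.4.1.14988.1 pattern is sound as a regex; if that still doesn't detect, the problem is likely in how the template is applied rather than in the pattern itself.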
  5. Hi All, First time posting so please be nice. I've recently configured some HP hardware monitoring (thanks Joe.McCall!) and am now getting alerts for some systems, which is great. I now need to configure the following: 1. Drive failure should alert within 5 minutes. The remote monitor is set to run every 5 minutes and is continuous. This takes care of this requirement but introduces an issue with ticket alerts being generated continuously. Changing the monitor to 'once' would fix this but would introduce another issue which I am trying to solve (below). 2. The ticket should re-open once daily if the issue is still present. This would 'protect' us if a tech closed the ticket without the issue being resolved. Technically I can address the above by creating two monitors, but then this would create two tickets for each alert. As an alternative, I could 'silence' the alerts for a period of time, perhaps 24 hours by default? I'm not sure how to do this, but coming from a Zabbix/Nagios background, I assume this is pretty simple? I would like to be able to do this anyway, or at least create an exception, because another issue I have with the hardware alerts is that some will always fail. We have some HP servers with non-genuine memory, and HP SNMP will report this as a failure. Without removing the server from the search group the SNMP checks are applied to, how can I create an exception? It looks like exceptions are only possible with internal monitors and not remote monitors? Any assistance would be greatly appreciated. Thank you!
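I don't know the Automate-native way to wire this up either, but the "re-open once daily while still failing" requirement reduces to a timestamp comparison against the last alert. A language-agnostic sketch in Python (the function and variable names are mine, not anything from Automate):

```python
from datetime import datetime, timedelta

# Suppression window: at most one alert per issue per 24 hours.
SUPPRESS_WINDOW = timedelta(hours=24)

def should_alert(issue_active, last_alerted, now):
    """Alert only if the issue is present and no alert fired in the window."""
    if not issue_active:
        return False
    if last_alerted is None:
        return True  # first occurrence: alert immediately
    return now - last_alerted >= SUPPRESS_WINDOW

now = datetime(2018, 1, 2, 9, 0)
print(should_alert(True, None, now))                        # first alert
print(should_alert(True, datetime(2018, 1, 2, 8, 0), now))  # suppressed
print(should_alert(True, datetime(2018, 1, 1, 8, 0), now))  # daily re-open
```

The same state (a "last alerted" timestamp per monitor) is what a ticketing exception list would key off for the known-bad servers with non-genuine memory.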
  6. wchesley

     Scripting Probe Commands

     How can probe commands be scripted? I want to script an SNMP walk. I see the probecommands table in the database, separate from the commands table. How do I pass probe commands through a script?