
tlphipps last won the day on April 21

tlphipps had the most liked content!

Community Reputation

18 Good

1 Follower

My Information

  • Location
    Dallas, TX
  • Agent Count
    3000+


  1. We have a script we put together for this. We built the core of it in PowerShell and then just use the "Script Execute" command in a CW Automate script to run our PowerShell. Here's what we're using:

         # Get the list of accounts in the local Administrators group that are NOT allowed
         # (this filters out the allowed accounts).
         # We have to leave 'administrator' since it's a built-in account,
         # but we disable that account via ThirdWall.
         $remove = net localgroup administrators | select -skip 6 |
             ? { $_ -and $_ -notmatch 'successfully|^administrator|^msp.localadmin|^xyz.devadmin$' }

         # Remove the accounts
         foreach ($user in $remove) { net localgroup administrators $user /delete }

     We then just have this script scheduled to run against all machines at this client periodically (every 3-4 days, I think).
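     On machines with Windows PowerShell 5.1 or later, the same cleanup can also be done with the LocalAccounts cmdlets instead of parsing net.exe output. A minimal sketch, assuming the same placeholder allow-list as above (swap in your own account names):

         # Sketch: remove non-allowed members from the local Administrators group.
         # 'msp.localadmin' and 'xyz.devadmin' are placeholders, as in the script above.
         $allowed = @('Administrator', 'msp.localadmin', 'xyz.devadmin')

         Get-LocalGroupMember -Group 'Administrators' |
             Where-Object { $allowed -notcontains ($_.Name -replace '^.*\\') } |  # strip DOMAIN\ or COMPUTER\ prefix
             ForEach-Object { Remove-LocalGroupMember -Group 'Administrators' -Member $_ }

     Running it once with -WhatIf on Remove-LocalGroupMember is a cheap way to confirm the allow-list before it actually deletes anything.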
  2. I want to publicly thank both @dfleisch and @herrchin for their engagement in this thread and conversation. It's totally OK to disagree and have different thoughts/approaches to things. As a long-time CW partner (LabTech user since 2011), I agree fully with @herrchin's suggestions and well-reasoned responses. I appreciate @dfleisch sharing the 'how we got here' information, but honestly, I think it says more about the failures on the CW side in properly setting standards and expectations for partners. I want to be VERY clear: that's not a jab or knock at @dfleisch. It's a comment on the approach of CW, the company.

     @herrchin does a great job here explaining our (partner) frustrations with the approach CW has taken on this, and while it's great they came up with some benchmarks, I think it's clear from comments by both parties that the benchmarks being used aren't actually representative of the real-life workloads seen on CW Automate servers. So, to steal @herrchin's example, CW is asking us to measure apples where oranges are what's needed. That's definitely doing a disservice to those partners (current and future) who are using, or looking at using, Azure or other services that have 'scored badly' on the apples test.

     Not that anyone cares at this point, but I'll reiterate that we're happily running Automate in Azure on a single VM (8 CPU, 56GB RAM) with over 5,300 agents. And I'm about to grow that to close to 6,000 agents before the year is out. I've contemplated splitting our server roles, but to be honest, everyone who looks at our instance performance (including employees coming from other Automate shops) is blown away at the performance we have compared to Automate in general (we still hit the usual Automate slowdown points). Yes, it's an expensive VM. But it's how we've chosen to structure our business. It's working well for us, and it's performing awesomely. Even when paging to disk, we see NO slowdown due to disk I/O. But our 'apples score' using the current diskspd recommendations is abysmal.

     I truly hope some CW product managers see and engage with the information from this thread, especially the points/info raised in @herrchin's last post. THAT is how CW can improve the product and their partner satisfaction and confidence level.
  3. We don't do this universally at this point, but we certainly have it in place for our clients with no onsite servers. It doesn't really need much horsepower; most of ours are retired desktops or laptops from our internal use. There's no need or requirement for probes to be on DCs.
  4. Whoops! Definitely didn't mean to post up here with our CWC Identifier in there. Thanks @mike_judd for the heads-up on that. I've replaced the XML above with a generic version, and I went ahead and added a 'variable set' line where you can easily add your specific CWC Identifier in each section of the script.
  5. I'm guilty of grabbing @DarrenWhite99's scripts previously and modifying and using them but never posting back. First off, Darren, YES, these work on macOS quite well (or did back then). One thing we discovered recently is that the program directory on macOS changed during a mid-6.9 release of CW Control. So we updated our "Get GUID" script to properly grab the GUID from the old location AND the new location. This has been working well for us. Here's our updated "Get GUID" script. Technically it still supports Windows, but since the plugin handles that directly now, we really only use this for macOS clients. We just call this script at the end of our "install" script on macOS machines to update the CWA database so the link from CWA to CWC exists for Mac agents just like it does for Windows agents. CW Control Get GUID - CUSTOM.xml
  6. We leave the maintenance window set weekly even though patches only really roll out monthly. It's a 2-hour gap in monitoring during an overnight period. We find that acceptable since we still get alerted after the maintenance period is over for any machines still offline (I can't think of a time that has happened in the last 10 years).
  7. Just make sure you have maintenance windows defined for your patching period. In our case, our patching window runs from 2-4am, so we have a maintenance window defined during that time for all servers. No alerts then, regardless of which machines are up/down, since they're all rebooting during that period.

     @Dayrak I'm really not sure how else to explain it. We patch Hyper-V hosts just like any other server; we don't treat them differently AT ALL. In the configuration of each Hyper-V VM, we make sure to choose the Automatic Stop Action of 'save' (see screenshot). In our experience this causes ZERO issues, even if the VM is in the middle of patching when the host reboots. The VM just starts right back up and finishes what it was doing when it turns back on.
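     For anyone who'd rather not click through the GUI on every VM, the same setting can be audited and applied host-wide with the Hyper-V PowerShell module. A minimal sketch (note that a VM must be powered off before this setting can be changed):

         # Audit the current Automatic Stop Action across all VMs on this host
         Get-VM | Select-Object Name, State, AutomaticStopAction

         # Apply 'Save' to any VM not already set that way (only works while the VM is Off)
         Get-VM | Where-Object { $_.AutomaticStopAction -ne 'Save' -and $_.State -eq 'Off' } |
             Set-VM -AutomaticStopAction Save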
  8. @dfleisch I greatly appreciate you jumping in with all of this incredibly detailed and helpful information. It's great to see CW employees contributing more and more in these forums. I don't doubt that MS is doing 'trickery' with how they implement and present some of the Azure virtual hardware. I will say though that I thought some of your comments toward the end of the post were maybe a bit more snarky than they needed to be. Or maybe I just read too much into it. I completely respect your expertise and appreciate the technical detail. At the end of the day, we're currently running over 5,300 agents on a single Azure VM and we're quite happy with the performance. But as we continue growing I'm certainly always evaluating our choices of on-prem/cloud/whatever to make sure it works well for our business.
  9. My understanding is that 'managed disks' do NOT need Storage Spaces striping to achieve great I/O, but I don't have any VMs using them at this point to confirm that. I CAN confirm that I get pretty dismal results when running that command from CW on my Azure system: basically 15.25 MiB/s. I can also confirm that with 5,400 agents on our system, there's NO WAY that's indicative of our actual performance. Sorry I can't help with CW refusing to move forward, though. No real ideas there.
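     For anyone who wants to reproduce this kind of test, a representative diskspd invocation looks like the sketch below. These are standard diskspd flags, but the values are illustrative only and not necessarily the exact parameters CW recommends:

         # 4 KiB blocks, 60 s run, 4 threads, 32 outstanding I/Os per thread,
         # random access, 30% writes, software+hardware caching disabled,
         # latency stats collected, against a 10 GiB test file
         .\diskspd.exe -b4K -d60 -t4 -o32 -r -w30 -Sh -L -c10G D:\iotest.dat

     The throughput summary in diskspd's output is presumably where the 15.25 MiB/s figure above comes from.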
  10. Same here. We couldn't do what we do without the awesome sharing that takes place on this forum and in Slack. Anywhere I can give back in any way, I'm 100% in. And I also love seeing/hearing what others are doing with Azure too so I know I'm not on an island!
  11. Yeah, that's often a topic of conversation here. For us, we have to factor in hardware cost and colo cost (or redundancies added to the office); we're already a multi-state organization, so Azure helps with that some; and we're pushing clients to Azure, so being 'all in' ourselves helps a bit. And in the 4 years we've been in Azure, we've not had a single outage. Not that it CAN'T happen, but it's awfully nice never really worrying about our internal 'infrastructure.' Pricey for sure, but as long as we're pricing our services correctly, if you spread that cost amongst 5,400 agents, it's not all that bad. I'm seriously considering doing a reserved instance for this to really cut the cost down.
  12. Sure. We're currently running on a DS13 (8 vCPU, 56GB RAM). I'm interested in some of the newer sizes and especially in managed disks, but sadly can't easily switch to either of those. So if somebody else wanted to do some testing for me... that'd be awesome!
  13. Actually really glad somebody else asked this. I recently ran the diskspd tool against our Azure instance as well and got results well below what CW recommends. But we're now at 5,400 agents and still see really great performance, IMHO.

      I mentioned previously that we're using striped disks in storage pools. I also mentioned increasing our instance size. We're up to an instance size with 56GB RAM and have 50GB available for MySQL. After some DB tuning to clear out cruft, our DB size hovers around 46GB, which means we're basically keeping the whole thing in RAM. When I look at disk performance using Resource Monitor, I rarely see spikes or queue build-up, which says to me that we don't really have any disk performance issues despite what the CW-recommended tool is saying.

      For now I'm happy where we're at, and after doing some DB cleanup and getting all the latest patches for CWA installed, our performance is better than ever. I'm definitely starting to look at a split-server config as we continue growing. But based on what I've seen/heard from others on performance, I'm not sure I really expect much gain over what we have right now.
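      Keeping the whole DB in RAM like this comes down to the InnoDB buffer pool size. A minimal my.ini sketch under the assumptions above (~50GB reserved for MySQL, ~46GB database); the exact values are illustrative, not a CW recommendation:

          # my.ini (illustrative values only)
          [mysqld]
          # Large enough to hold the ~46GB database fully in memory,
          # while leaving headroom for the OS and other services
          innodb_buffer_pool_size = 48G
          innodb_buffer_pool_instances = 8

      You can confirm the running value with: SHOW VARIABLES LIKE 'innodb_buffer_pool_size';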
  14. Post-upgrade we're still looking good on this. I fully echo what @Jacobsa said above. I really hope this is a sign of 'more to come,' with CW engaging with this community to help provide workarounds and solutions. It's greatly appreciated.