
tlphipps last won the day on December 27 2019

tlphipps had the most liked content!

Community Reputation

21 Excellent

1 Follower

My Information

  • Location
    Dallas, TX
  • Agent Count


  1. We’ve incorporated stuff from here in some CWA scripts. https://github.com/OfficeDev/Office-IT-Pro-Deployment-Scripts/tree/master/Office-ProPlus-Deployment/Remove-PreviousOfficeInstalls
  2. @kevinjackson Sorry for never responding. I didn't get a notification email and just logged back in today. The URL for your transfer folder is: https://<your_cwa_fqdn>/labtech/transfer/xxxxxxxxxxxxxxxxxxx
  3. Been running patch 9 since Tuesday evening. All good so far.
  4. We have a script we put together for this. We built the core of it in PowerShell and then just use the "Script Execute" command in a CW Automate script to run our PowerShell. Here's what we're using:

     # Get the list of accounts in the local Administrators group that are NOT allowed (this filters out the allowed accounts).
     # We have to leave 'administrator' as it's a built-in account. But we disable that account via ThirdWall.
     $remove = net localgroup administrators | select -skip 6 | ? {$_ -and $_ -notmatch 'successfully|^administrator|^msp.localadmin|^xyz.devadmin$'}

     # Remove the accounts
     foreach ($user in $remove) { net localgroup administrators $user /delete }

     We then just have this script scheduled to run against all machines at this client periodically (every 3-4 days, I think).
  5. I want to publicly thank both @dfleisch and @herrchin for their engagement in this thread and conversation. It's totally OK to disagree and have different thoughts/approaches to things. As a long-time CW partner (LabTech user since 2011), I agree fully with @herrchin's suggestions and well-reasoned responses. I appreciate @dfleisch sharing the 'how we got here' information, but honestly, I think it's more telling of the failures on the CW side in properly setting standards and expectations for partners. I want to be VERY clear: that's not a jab or knock toward @dfleisch. It's a comment on CW, the company's, approach.

     @herrchin does a great job here explaining our (partner) frustrations with the approach CW has taken on this, and while it's great they came up with some benchmarks, I think it's clear from comments by both parties that the benchmarks being used aren't actually correct for the real-life workloads seen on CW Automate servers. So, to steal @herrchin's example, CW is asking us to measure apples where oranges are what's needed. That's definitely doing a disservice to those partners (current and future) who are using, or looking at using, Azure or other services that have 'scored badly' on the apples test.

     Not that anyone cares at this point, but I'll reiterate that we're happily running Automate in Azure on a single VM (8 CPU, 56 GB RAM) with over 5,300 agents, and I'm about to grow that to close to 6,000 agents before the year is out. I've contemplated splitting our server roles, but to be honest, everyone who looks at our instance performance (including employees coming from other Automate shops) is blown away by the performance we have (compared to Automate in general; we still hit the usual Automate slowdown points). Yes, it's an expensive VM. But it's how we've chosen to structure our business. It's working well for us, and it's performing awesomely. Even when paging to disk, we see NO slowdown due to disk I/O.

     But our 'apples score' using the current diskspd recommendations is abysmal. I truly hope some CW product managers see and engage with the information from this thread, especially the points/info raised in @herrchin's last post. THAT is how CW can improve the product and their partner satisfaction and confidence level.
  6. We don’t do this universally at this point, but we certainly have it in place for our clients with no onsite servers. It doesn’t really need much horsepower; most of ours are retired desktops or laptops from our internal use. There’s no need or requirement for probes to be on DCs.
  7. Whoops! Definitely didn't mean to post up here with our CWC Identifier in there. Thanks @mike_judd for the heads-up on that. I've replaced the XML above with a generic version. I went ahead and added a 'variable set' line where you can easily add your specific CWC Identifier in each section of the script.
  8. I'm guilty of grabbing @DarrenWhite99's scripts previously and modifying and using them but never posting back. First off, Darren, YES, these work on macOS quite well (or did back then). One thing we discovered recently was that the program directory on macOS was changed during a mid-6.9 release of CW Control. So we updated our "Get GUID" script to properly grab the GUID from the old location AND the new location. This has been working well for us. Here's our updated "Get GUID" script. Technically it still supports Windows, but since the plugin handles that directly now, we really only use this for macOS clients. We just call this script at the end of our "install" script on macOS machines to update the CWA database so the link from CWA to CWC exists for Mac agents just like it does for Windows agents. CW Control Get GUID - CUSTOM.xml
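The old-location-then-new-location idea in item 8 can be sketched in shell. This is a minimal illustration, not the attached XML script: the two config paths below are hypothetical placeholders (the actual pre- and post-6.9 CW Control install directories on macOS would need to be substituted), and the GUID is extracted by pattern rather than by parsing the real config format.

```shell
#!/bin/sh
# Sketch of the "Get GUID" fallback logic: try the old CW Control program
# directory first, then the new one, and print whichever GUID is found.

OLD_CFG="/opt/cwcontrol-old/session.cfg"   # hypothetical pre-6.9 location
NEW_CFG="/opt/cwcontrol-new/session.cfg"   # hypothetical mid-6.9+ location

get_guid() {
  # Extract the first GUID-shaped token (8-4-4-4-12 hex groups) from the file,
  # silently returning nothing if the file does not exist.
  grep -Eo '[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}' "$1" 2>/dev/null | head -n 1
}

GUID="$(get_guid "$OLD_CFG")"
[ -z "$GUID" ] && GUID="$(get_guid "$NEW_CFG")"
echo "$GUID"
```

The real script would then push that value into the CWA database so the CWA-to-CWC link exists for the Mac agent.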
  9. We leave the maintenance window set weekly even though patches only really roll out monthly. It's a 2-hour gap in 'monitoring' during an overnight period. We find that acceptable since we still get alerted when the maintenance period is over for any machines still offline (I can't think of a time that has happened in the last 10 years).
  10. Just make sure you have maintenance windows defined for your patching period. In our case, our patching window runs from 2-4am, so we have a maintenance window defined during that time for all servers. No alerts then, regardless of which machines are up/down, since they're all rebooting during that period.

      @Dayrak I'm really not sure how else to explain it. We patch Hyper-V hosts just like any other server. We don't treat them differently AT ALL. In the configuration of each Hyper-V VM, we make sure to choose the Automatic Stop Action of 'Save' (see screenshot). Our experience is that this causes ZERO issues, even if the VM is in the middle of patching when the host reboots. The VM just starts right back up and finishes what it was doing when the VM turns back on.
  11. @dfleisch I greatly appreciate you jumping in with all of this incredibly detailed and helpful information. It's great to see CW employees contributing more and more in these forums. I don't doubt that MS is doing 'trickery' with how they implement and present some of the Azure virtual hardware. I will say though that I thought some of your comments toward the end of the post were maybe a bit more snarky than they needed to be. Or maybe I just read too much into it. I completely respect your expertise and appreciate the technical detail. At the end of the day, we're currently running over 5,300 agents on a single Azure VM and we're quite happy with the performance. But as we continue growing I'm certainly always evaluating our choices of on-prem/cloud/whatever to make sure it works well for our business.
  12. My understanding is that 'managed disks' do NOT need Storage Spaces striping to achieve great I/O, but I don't have any VMs using them at this point to confirm that. I CAN confirm that I get pretty dismal results when running that command from CW on my Azure system: basically reporting 15.25 MiB/s. I can also confirm that with 5,400 agents on our system, there's NO WAY that's indicative of our actual performance. Sorry I can't help with CW refusing to move forward, though. No real ideas there.
  13. Same here. We couldn't do what we do without the awesome sharing that takes place on this forum and in Slack. Anywhere I can give back in any way, I'm 100% in. And I also love seeing/hearing what others are doing with Azure too so I know I'm not on an island!
  14. Yeah, that's often a topic of conversation here. For us, we have to factor in hardware cost and colo cost (or redundancies added to the office); we're already a multi-state organization, so Azure helps with that some; and we're pushing clients to Azure, so being 'all in' ourselves helps a bit. In the 4 years we've been in Azure, we've not had a single outage. Not that it CAN'T happen, but it's awfully nice never really worrying about our internal 'infrastructure.' Pricey for sure, but as long as we're pricing our services correctly, if you spread that cost amongst 5,400 agents, it's not all that bad. I'm seriously considering doing a reserved instance for this to really cut the cost down.