  1. I am not sure it is a security risk to have the code available. Having access under NDA would be good; more eyes can help get it fixed, and we could also get it running on new platforms. I have a large number of Power Linux servers (RHEL on IBM POWER CPUs) in house and at my customer sites, and we also have zLinux systems (RHEL on IBM Z Series). Both of these are big endian and would need source code in order to compile for the platform. We also do a lot of work with FreeBSD on many different CPU architectures (POWER, OpenPOWER, SPARC, ARM, Intel, MIPS); making the source fully portable would require a good amount of work, but my team has the ability to do this and has done it for many applications.
  2. They bought Ksplice several years ago (RHEL 5 days). They were a bunch of ex-IBMers from the mainframe division, where such things have been ho-hum day-to-day stuff for years. They wanted to get this code out to the general community and the mainframe Linux users, but their VC sold to Oracle, and they ended up getting sucked into Larry's evil empire. (I am not a big fan of Oracle, but I will use their free/cheaper services to better my systems; Ksplice is one of them.) I even used the Ksplice tooling to do a live upgrade from 6 to 7; the machine has not been rebooted since 6 went into production (uptime this morning: 1,572 days).
  3. FYI, I am running LT10 on a split front end/back end with a Linux-based Percona XtraDB Cluster using multi-master replication (8-way between my 3 data centers). It works very well. My setup was not really aiming for speed; it was about keeping the data safe and available if we had a failure.
  4. This was about a year ago; for the last year they have been trending about 6-8 weeks behind RHEL on patches, while Oracle is about 2 weeks. Oracle Linux is a good free base with lots of enhancements, including a custom kernel ported from current, so it tends to have a bunch of new stuff when it comes to performance and file systems. Also, if you buy their support contract (about half as much as Red Hat's), you get Ksplice, which lets you do live kernel updates with no reboot needed!
  5. The preferred platforms are RHEL derivatives, which includes CentOS. I do not recommend CentOS specifically because they have been slow to patch since Red Hat hired the core developers in an attempt to kill the project (the Red Hat employees we know HATE CentOS and Oracle with a nearly religious fervor). If you want a free RHEL derivative...
  6. Question to the community: how many people would be interested if someone built a turnkey solution for this, and how much would it be worth to you? Also, would you prefer such a solution as software plus support, or the whole kit, hardware included?
  7. I am also interested in this. Dave, would you be able to give me some details on this?
  8. Linux+ is super basic, and they try to be agnostic to any distribution; this means you will understand the concepts but not the real day-to-day stuff. If you're going to invest in training and want to go the enterprise Linux route, go here: http://www.redhat.com/en/services/training/courses-by-curriculum#Red-Hat-Enterprise-Linux. Red Hat is the 900 lb gorilla in the enterprise market, with SUSE being a close second. Debian and its related distro Ubuntu are popular in the education and hobbyist markets (it is also home of the open source purists).
  9. It can be made to do that. Initially it is easy on-site with two boxes (or VMs on different hosts and datastores). A word of caution: if you're not familiar with clustering outside of the Microsoft world, it is different and has its own quirks. Most of the Microsoft cluster design came from ideas used in the Unix/Linux world, but they were implemented totally differently.
If you're going to look at a cluster of MySQL servers, you want to use a Maria-based 5.x distribution. For clustering and HA my favorite is Percona XtraDB Cluster; it is open source and you can buy a support contract for it. The family of clustering is the same as MariaDB's, using Galera Cluster (WSREP is the API used). Unlike Microsoft clustering, it is shared-nothing multi-master: the only things the servers need in common are the same root password and config file.
Currently the only clustering for MySQL (and compatible) databases that will work with LabTech is *nix only; this will not run on a Windows server. I think anyone looking at doing this or something similar should start by moving the DB to a Linux box first; clustering can be added later. It is easy to hang oneself when working with HA systems, and no end of problems can come from not knowing what you're doing. I am happy running it myself, as I have years of managing Linux clusters behind me, but this is not something for a first-timer yet...
If you want to embark down this road, the first thing I recommend is to read the manual: http://www.percona.com/doc/percona-xtradb-cluster/5.6/. I can help you with the finer points once you have gone down that road. Step 2: get two boxes running a RHEL-based Linux (CentOS or Oracle). You can do it with Debian, but I am not a Debian guy, and they are just different enough to throw you for a loop.
Box specs:
- Memory: this is a big player in your system. It should be equal to what LT recommends (the OS will use less, so you can knock it back by 4 GB if you want). ECC is a MUST; do not try to do this without ECC memory.
- CPU: the more the better, but the Linux code is more CPU-efficient than the Windows code, so you can cut the LT recommendation by half.
- OS drive: 80 GB minimum, mirrored volume.
- Data drive: 10k RPM, mirrored, same size as the LT recommendation.
- Logs and caches should be on SSD (if your data set is small enough, or your wallet big enough, you can put the logs and the data on SSD). The log and cache drives must be mirrored (3-way if possible, with a spare); loss of this data is catastrophic, and you will have to restore from a dump unless another member of the cluster survives.
- Network: 1 Gb LAN is sufficient for most uses. When you get to the cluster setup, you will need a high-quality backend switch (we are talking Cisco/Juniper, no Netgear); this switch will only carry data to and from the LT server and the cluster members.
- UPS is a must, and it should be a high-quality full online unit (true online, not line interactive); a Liebert GXT3 or GXT4 is best. Dual power supplies are recommended, and they should go to different UPS units.
- The quality of the system you use will affect the quality of service you get: no desktop-class machines. Stick with an HP DL3xx or Cisco UCS Cxxx (not the entry-level Cxx). Stay away from Lenovo (I know they have great margins, and they used to be IBM, but they are not IBM any more; the quality for this kind of work is not there anymore). Dell servers are decent as well; I just do not deal in them, so I can't quote models.
That should get you going.
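For the cluster stage, the Galera-specific settings live in my.cnf alongside the normal MySQL tuning. Here is a minimal sketch of the [mysqld] settings a PXC 5.6 node needs; the cluster name, addresses, and SST credentials below are placeholders I have made up for illustration, not values from any real setup:

```
[mysqld]
# Galera/WSREP settings for a Percona XtraDB Cluster 5.6 node (sketch)
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=lt-cluster
wsrep_cluster_address=gcomm://10.0.0.11,10.0.0.12
wsrep_node_address=10.0.0.11
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:sstpassword

# Required by Galera: row-based replication, InnoDB only
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
```

The first node is bootstrapped with an empty gcomm:// address (or the distribution's bootstrap service); subsequent nodes join by pointing wsrep_cluster_address at the running members.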
  10. Let's talk about the high-level steps to do this (for a single server). A few configurations need to be done on your LabTech server:
- Download MySQL Proxy (you will need this for the marketplace and other apps that expect a local MySQL server).
- Retrieve the MySQL root password from the registry.
- Change the database host variables in the registry.
- Stop the LabTech services.
- Dump the local MySQL database.
On the DB host:
- Install the MariaDB 5.x tree from your distribution's package manager.
- Run mysql_secure_installation and set the root password to the same one you got from the registry on the LabTech host.
- If you look at the my.ini file in your LabTech MySQL directory, the majority of those settings can be copied and pasted directly into the my.cnf file (usually in /etc on Linux or /usr/local/etc on FreeBSD), with the exception of the data path. On my install the config referenced nonexistent SSL key files; do not copy those statements into the Linux server, as that will result in a fatal error. Lastly, you must add lower_case_table_names = 1 to the end of the [mysqld] section.
- Restart the MySQL service.
- Import the dump file from the LabTech server.
- Configure MySQL Proxy on the LabTech host to listen on the loopback interface on port 3306.
- Start the LabTech services; you should be good. More to come.
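The my.cnf edits in the steps above amount to something like this minimal [mysqld] fragment; the datadir shown is the common Linux default, an assumption rather than a value from any particular install:

```
[mysqld]
datadir=/var/lib/mysql      # keep the distro's data path, not the Windows one
port=3306

# Do NOT carry over ssl-ca/ssl-cert/ssl-key lines from my.ini if the
# referenced files do not exist on the Linux box - that is a fatal error.

# Windows MySQL is case-insensitive; this keeps table names matching
# after the import.
lower_case_table_names=1
```

Restart the service after editing so the setting takes effect before the import.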
  11. So far so good with the Codeweavers solution Next step on the project was getting in the L: functionality We have two angles depending on how you want to go Option 1. is to mount the LT share on your mac then map the local folder in codweavers to the L: Option 2. is to host the LTshare on a cloud service that can then sync your folder to the mac and then map the local folder to the L: in codeweavers Option 1 keeps the data in your firewall, but requires a VPN, we where trying to avoid this as it can be clunky and as we all know to well SMB over a vpn can be very very slow Option 2 puts potential security risks in place depending on the provider, but gets SMB out of the mix and keeps the file system load off the LT server We went option 2 using BOX which has HIPPA,GLBA,SOX,FRCP compliance and we already use it for other data. The mapping process on the mac side in code-weavers is very easy, launch crossover office switch the view to icon view Select the Labtech Bottle (What ever you called it by default it will be Labtech Control Center) Select WINE Configuration Select Drives Click Add Select the drive Letter you want (In this case L) when you get back to the drive configuration, high light the L: which will default to your root path) In the edit box below enter the full path to the synced or mounted folder (In the case of a mount on a mac it will be /Volumes/LTShare) Click apply and Ok Relaunch Control Center and voila you have the L:
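The GUI steps above can also be done from a terminal, since Wine drive letters are plain symlinks under the bottle's dosdevices directory. This is a sketch, not official CrossOver tooling; the BOTTLE path and SHARE path are assumptions, so adjust the bottle name and mount point to match your install:

```shell
# Map L: in a Wine bottle by creating the dosdevices symlink directly.
# BOTTLE and SHARE defaults below are assumptions about a typical
# CrossOver-on-Mac layout; override them for your setup.
BOTTLE="${BOTTLE:-$HOME/Library/Application Support/CrossOver/Bottles/LabTech Control Center}"
SHARE="${SHARE:-/Volumes/LTShare}"

mkdir -p "$BOTTLE/dosdevices"             # normally created by CrossOver itself
ln -sfn "$SHARE" "$BOTTLE/dosdevices/l:"  # L: now points at the synced folder

echo "L: -> $(readlink "$BOTTLE/dosdevices/l:")"
```

Relaunching the Control Center after creating the link should show the L: drive, the same result as the GUI path.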
  12. I am working on a project during my initial implementation of LabTech to obtain the same level of availability our previous monitoring and management platform had (we are averaging five nines at this point with the old system, but it has become too costly to maintain and does not have the tools we need). Below is a 10,000-foot picture of my setup; it does not show many of the finer details. The database cluster (Galera) already existed, along with the BI system. I am still working through some issues with the initial database load. What we did was a full dump of the databases (mysqldump --all-databases -uroot -p{password from the registry} > ltdump.sql), then imported it into the cluster. Using MySQL Proxy, we disabled the local MySQL database and pushed the queries from localhost to the cluster load balancer. It is worth noting that we are using HAProxy on FreeBSD with CARP to handle the load balancing; we have in the past used GreenSQL, as that helps with compliance issues, but for this purpose we are sticking with HAProxy. I am posting this so those interested might try to hack at it and we can share some info on getting it running. Ultimately my company may go the route of creating a turnkey solution for this, as we do for other apps for our customers. Here is a new diagram; this one is a bit simpler and shows what a typical HA database setup would look like. My setup is very complex due to the nature of what we do. The above does not cover HA of the LabTech front end itself, just the database backend; it also allows for massive scalability of the database.
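The localhost-to-cluster redirection described above can be sketched as a TCP listener in HAProxy; the node names and addresses below are placeholders for illustration, not details from my setup:

```
# Sketch: HAProxy fronting a 3-node Galera cluster for LabTech.
# Addresses are hypothetical; health checks here are TCP-level only.
listen mysql-cluster
    bind 127.0.0.1:3306
    mode tcp
    balance leastconn
    option tcpka
    server galera1 10.0.0.11:3306 check
    server galera2 10.0.0.12:3306 check
    server galera3 10.0.0.13:3306 check
```

A plain TCP check only proves the port answers, not that the node is Synced in Galera terms; production setups usually add an HTTP check against a clustercheck-style script on each node.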
  13. Hi, new to the LabTech world, just starting our initial migration from N-able. In the process of doing this, my team and I have built a fairly complete wrapper for the Control Center that will get it to run on OS X (and probably Linux as well). The only bug we know of is that chat with a client will cause an exception to be thrown, and you will be given the option to terminate the Control Center or continue (but the chat will not load). We are using the commercial Wine implementation from Codeweavers. I would like some testers for this if possible; I have only tested on Mac, not Linux, but you're welcome to try it there.
Trial for the Codeweavers CrossOver environment: Mac: https://www.codeweavers.com/products/crossover-mac/download/ Linux: https://www.codeweavers.com/products/crossover-linux/download/
Once it is installed, please go to https://www.codeweavers.com/compatibility/browse/name/?app_id=10949. Step two: download the CrossTie (this is an XML script that builds and installs the environment for the Control Center on your machine). On Mac, once you have the CrossTie file, double-clicking on it will load CrossOver and give you a warning saying this is an unsupported script (we need at least 100 paid users before their support will sign the file). Once you click OK, it will start building the environment; this includes downloading and installing Microsoft .NET (2.0, 3.5) and Microsoft MDAC 2.8 SP1, and then downloading the Control Center 2013 installer. You will have to click through all the EULAs. You should then be able to launch the Control Center.
I have this running on an older Mac (mid-2007, Core 2 Duo, 8 GB RAM); it takes about 10 minutes to load. On my newer machine (late 2012, Core i7, 16 GB) it takes about 20 seconds, and my Mac Pro (2013, Xeon, 64 GB) is about 10 seconds. A forum monitored by Codeweavers staff and myself is at https://www.codeweavers.com/compatibility/browse/name/?app_id=10949;forum=1; you can post there as well. Give it a shot and let me know. Steven Peterson