There are applications, like Exchange, where I do application-level availability on top of VM-level HA. There is no way I am doing anything less than HA of the VMs from the hypervisor. It's a 24x7 business that has to be up and running all the time, even when we move buildings and lose utility power for 48 hours.

I consume about 12 TB for my VMs and all their data. I run 100 active VMs, with 150 total in the cluster (I turned off all the extraneous test machines to save on IOPS). Now that I have stopped doing NTFS deduplication and stopped running active Exchange on my hyperconverged cluster, I can start turning some features on for a few VMs that should help performance and see how it goes. I do have CPU cycles available on the hypervisor to do compression, and probably deduplication as well. Even if I do buy Nimble I will need to do that, because I would be taking my hyperconverged stack to my DR site to replace those aging servers and Dell EqualLogic storage. Before I drop another big PO on them, it will certainly be worth my while to do more tuning and testing to see if I can make it work.

Finding out that it wasn't performing up to snuff with the initial workload was a pretty big disappointment. It was less than 12 months ago that I made the purchase, and I figured it would get us through at least 5 years. Believe me, I am looking at all the options with the current vendor. I need to know that a full backup of my Exchange server or file servers isn't going to break my environment. I wasn't as worried about the random workloads on Nimble as much as the streaming backup workload.

How much utilisation do you have on the CPUs?
If you have the cycles to burn, then I would definitely use them. This is a massive difference and will lead to much more data coming from the slow tier. It seems you have 3 hosts, so you effectively have 2400 GB of cache, which could have been 7200 GB (or 12000 GB) of cache with compression and deduplication. Have you spoken to the vendor in question? Depending on the solution, turning off compression and deduplication at the SSD tier to increase performance could have had the exact opposite effect, as you are reducing the effective capacity of the SSD tier by a factor of 3-5x (depending on your specific data).

The concern is that while it may buy us back some performance, we seem to be running close to the performance edge, and we should have performance to burn. I do plan on turning it back on for a few VMs to see if the system still performs well. We had turned off compression and deduplication on the SSD tier to try to help with the performance issues.

That would possibly solve my problem, but doesn't seem to be an option. It would be great if I could just replace my 800 GB SSDs with larger ones. Now I am looking at either buying another very expensive brick (more than the cost of my first 3 bricks) or moving in a different direction. I have plenty of capacity for data, but too many slow reads are coming from the NL tier. As soon as I moved my Exchange data and file server off EqualLogic to the new system, performance seriously tanked, as in a significant number of my Outlook clients couldn't connect because of latency issues. I thought we had sized it properly, and everything was fine running 100 VMs.

Hyperconverged still makes sense on paper and seems to be really accelerating. We were about to buy when a hyperconverged vendor came in with really good pricing; it really looked like the future, and we jumped.
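The cache-sizing arithmetic above can be sketched out quickly. This is a minimal illustration, not vendor tooling: the 3 hosts, one 800 GB SSD per host, and the 3x and 5x data-reduction ratios are the figures assumed in the thread, not measured values for any particular array.

```python
# Effective SSD cache capacity with and without data reduction.
# Assumptions (from the thread, not measured): 3 hosts, one 800 GB SSD each,
# and dedup/compression ratios somewhere in the 3x-5x range.

HOSTS = 3
SSD_GB_PER_HOST = 800

raw_cache_gb = HOSTS * SSD_GB_PER_HOST  # raw SSD tier with reduction disabled
print(f"raw cache: {raw_cache_gb} GB")  # 2400 GB

for ratio in (3, 5):
    effective_gb = raw_cache_gb * ratio  # logical data the tier can hold
    print(f"{ratio}x reduction -> {effective_gb} GB effective cache")
```

This prints 7200 GB at 3x and 12000 GB at 5x, which is the gap the reply is pointing at: disabling reduction shrinks how much of the working set fits in the fast tier, pushing reads down to the NL tier.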