VMware Cloud Management

Tuesday, 12 February 2019

VMware Cloud Foundation & Dell MX7000 Composable Infrastructure

Recently I have been blogging about VCF, a product that has really impressed me. The more customers hear about its ever-increasing range of capabilities, the closer this stepping stone to a hybrid cloud gets to being the full package.
vSAN-based workload domains are a really powerful building block, and VCF 3.5 added NFS-based workload domains for even greater flexibility. The ability to present composable infrastructure for workload domains is another fantastic addition. I feel the pairing of VCF and the Dell EMC MX7000 will be an attractive offering for systems integrators and hosting companies.
Before discussing the relationship between VMware Cloud Foundation and composable infrastructure, let me introduce the MX7000, Dell EMC's recently launched kinetic infrastructure platform, which among many things has been certified as a vSAN platform. Below is an overview highlighting its main attributes.



 
MX7000 Overview & key points:

  1. 7U chassis
  2. Maximum of 8 compute sleds
  3. 6 disk slots on each compute sled
  4. Maximum of 112 local disks per chassis
  5. 25GbE networking available today, with 50GbE and 100GbE coming soon
  6. Group up to 10 chassis in a single fabric
  7. No-midplane design, meaning no single point of failure
  8. vSAN certified
  9. VCF 3.5 certified
  10. Plug-and-play fabric deployment with SmartFabric Services
  11. Support for NVMe, plus M.2 for boot
  12. Up to 28-core processors per compute sled



So let's take a look at how you add an MX7000 chassis, or group of chassis, to the VMware Cloud Foundation SDDC Manager.

Similar to the "Commission Hosts" process, you need to present your chassis hosts to the SDDC Manager. This is achieved by configuring the Redfish translation layer URL, which connects the chassis to the SDDC Manager. More information on this process can be found in the VCF 3.5 documentation; see the link below.

VMware Cloud Foundation v3.5 Administration
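
As a quick sanity check before commissioning, you can confirm that the chassis management endpoint is actually answering Redfish requests. Below is a minimal Python sketch assuming a reachable OME-Modular management address; the hostname and credentials are illustrative placeholders, not values from the VCF documentation.

  # Check that the chassis Redfish service is reachable before pointing
  # SDDC Manager at it. Hostname and credentials are placeholders.
  import requests

  CHASSIS = "https://mx7000-mgmt.example.local"   # hypothetical OME-Modular address
  AUTH = ("admin", "changeme")                    # replace with real credentials

  # /redfish/v1/ is the DMTF-standard Redfish service root
  resp = requests.get(f"{CHASSIS}/redfish/v1/", auth=AUTH, verify=False, timeout=10)
  resp.raise_for_status()
  print("Redfish version:", resp.json().get("RedfishVersion"))

  # The standard Chassis collection lists the enclosure and compute sleds
  members = requests.get(f"{CHASSIS}/redfish/v1/Chassis", auth=AUTH, verify=False, timeout=10)
  for member in members.json().get("Members", []):
      print("Found:", member["@odata.id"])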

Presenting the external resource is as straightforward as that, and it is now ready to be consumed by VCF.

In summary, customers can present up to 80 vSAN compute nodes to VCF 3.5 (8 sleds across 10 chassis), in addition to or as an alternative to rack-based ready nodes. This will not be a pairing for everyone, but businesses that want to automate and stay as hands-off with the hardware as possible will eat this technology up.

For more information I would recommend the links below.

VMware Cloud Foundation Operations and Administration Guide

VMware Cloud Foundation Hands on Lab

Dell EMC MX7000 Virtual Rack

Dell EMC MX7000 Overview

vSAN Hardware Compatibility List

Wednesday, 16 January 2019

The HCI Landscape is evolving fast and Dell EMC VXRail is setting the pace.



For the last number of years I have been having HCI discussions with customers and partners of my employer. It started with the first release of Dell XC (the Nutanix offering from Dell EMC), then EVO:RAIL, followed by vSAN and, in recent times, VXRail. The techie in me was slow to appreciate VXRail, but this is a product that has gone from strength to strength, and now I get it. In this modern era, systems admins are fewer in companies but tasked with doing more, and one of the biggest parts of that is running infrastructure. VXRail, as I have written about before, takes away the main time consumer and gives you back a lot of time to focus on other value-add activities for your company.


The recent VXRail announcements featured the 4.7 release, which brings support for 2-node ROBO on VXRail, tighter integration with VMware vCenter, and VMware Cloud Foundation 3.5 support. Availability should be in the April/May time frame. VCF is something I have spoken a lot about and I am a big fan, so when the announcement came that VCF 3.5 would be supported on VXRail I was naturally excited.
VXRail is a turnkey solution, and one of its main value propositions is the power of vSAN combined with strong automation: lifecycle management plus automated tools for daily operations. Layer VMware Cloud Foundation on top of this approach and it evolves from a cluster-oriented appliance into a hybrid cloud built on x86 hardware. The two heroes of this product are:
VXRail Manager & SDDC Manager:



In this solution, the role of VXRail Manager is to handle:

  • Lifecycle management of all hardware
  • Health monitoring
  • Deployment of virtual appliances such as IsilonEdge & RPVM

The role of VCF's SDDC Manager is to complement this by providing lifecycle management for all virtual components, including:

  • Workload Domain Hypervisor
  • vCenter
  • NSX
  • vRealize Suite
  • SDDC Manager

The integration here is so tight that when you provide your VMware credentials in the SDDC Manager, you get access to the VCF repository, which includes a unique build for VXRail. This is similar to the bundles available for VXRack systems and other turnkey solutions.



There are two additions to VCF that really caught my attention. The first is stretched cluster support, which is not available on current rack-scale solutions; with VCF 3.5 I can spread my datacenter across two physical sites or across a campus. The 3.5 release also provides support for multiple availability zones.
The second is support for NFS-based workload domains, a subject I plan to blog about separately. The value in this feature is that I can continue to sweat existing assets that still have value, or are still within their lifecycle, while investing in modernized consumption methods.

I am going to wrap this up, but I plan to be very active this year with blogs, so if you find this useful please share, and feel free to drop a comment or thought.

Thursday, 15 November 2018

The Software Defined Revolution through a Dell EMC Lens




The year is 2018 and we are riding the wave of the digital transformation era. If you are attending technology events or seminars, or just spending time listening to podcasts, you will be well aware that it's software-defined everything. Leading the way in this space have been companies including VMware, Nutanix, and Dell EMC, with stiff competition from some really good companies. It's an exciting time to be involved in this field. The motivation for me to write this post was the recent VMworld announcements for VXRail, the jointly engineered platform from VMware and Dell EMC.
VXRail has been riding a wave of success over the last couple of years, and this month at VMworld came the announcement that VMware Cloud Foundation 3.5 will be supported on the platform. In my opinion this is the fur coat on an already well-finished solution; let me try to explain. VXRail is a vSAN-based appliance with easy-button deployment wizards that make you operational in hours instead of days. Users of VXRail are running vSAN 6.2 or 6.6, depending on whether they have upgraded. Scaling the solution is straightforward: if you need additional compute you can scale one node at a time, and with several different flavours, such as VDI-optimized nodes, you have choice! In addition to scaling out the cluster, VMware stretched clusters are also supported, allowing customers to consume this technology in an active/active topology. Finally, VXRail has a one-click upgrade process, which is the real power: a single bundle can take your entire environment from, for example, vSphere 6.0 running vSAN 6.2 up to vSphere 6.5 running vSAN 6.6. In simple terms, what does this mean? It means that systems admins don't have to worry about keeping the lights on anymore; they can focus on adding value to their companies.
Fast forward a little, and Dell EMC and VMware announced support for VVD on VXRail: a set of blueprints and best practices to ensure you are configured in the best possible way. Complementing this now is the announcement that VCF 3.5 will be supported on VXRail, which in my opinion completes the solution. Why? VCF is deployed aligned to the VMware Validated Design (VVD 4.x) with the added capability of NSX-T. At this stage some people might be thinking I am talking about VXRack. I am not; these are very different approaches. VXRack is very rigid as a solution but has rack scale in its favour. The value in VXRail and VCF is clear:


  1. Bring your own networking
  2. Management workload domains can run on 4 nodes
  3. Support for NSX-T
  4. HCX, providing a route to public cloud
  5. Workload domains for Kubernetes
  6. NFS-based workload domains
  7. Support for stretched clusters

Take the above key features as I see them, all governed by the new-look SDDC Manager, add VXRail Manager into the equation, and we have end-to-end management with a heavy volume of orchestration. A couple of these updates are really interesting.
Referencing point 6: in the last couple of days Dell EMC announced that its midrange storage product, Unity, has been validated for use with VMware Cloud Foundation 3.5. This offers flexibility, especially for service providers who often have predefined bundles per customer type; now they can use vSAN-based workload domains alongside NFS-based workload domains, with a wide range of Unity models to pick from.
https://blog.dellemc.com/en-us/dell-emc-unity-is-first-storage-platform-validated-with-vmware-cloud-foundation/
Referencing point 7: I attended a session by Daniel Koeck at VMworld in Barcelona where he walked through configuring a stretched cluster using VCF. It's early days for this feature, but it offers flexibility unseen in the rack-scale solutions provided by the industry leaders.


I am really excited about this release, and over the coming weeks I will be testing VCF 3.5 on vSAN with NFS WLDs. I plan to evaluate as much of the above as possible and will do my best to share that information.

If you would like to learn more about VMware Cloud Foundation, follow the link below:
https://my.vmware.com/en/web/vmware/evalcenter?p=cloud-foundation-18-hol

Learn about VXRail at the below link:
https://www.dellemc.com/en-ie/converged-infrastructure/vxrail/index.htm

I hope you are as excited as I am; if you like this post please take a minute and share.


Saturday, 10 November 2018

VMworld Barcelona 2018 Recap



Last week I attended VMworld Barcelona and it was a cracker of a week. I felt the agenda was more packed this year, with extra breakout sessions and many more activities both at the Fira Gran Via centre and at nearby venues. My first thought at the end of the event was that I could finally breathe; it was such a whirlwind week. It kicked off on Monday, which is partner day. An excellent keynote from Jean-Pierre Brulard and Pat Gelsinger spoke of the shift from migration to public cloud towards businesses looking at a hybrid cloud approach, which in my opinion is the best approach, as the TCO of public cloud is still far greater than on-premises.

My morning kicked off with the mandatory coffee, but also a roundtable discussion with Barry Coombs of ComputerWorld and some of his partners and customers, where we chatted about hyperconverged technologies, their direction, and the role of block storage in the modern datacenter. This was a super positive discussion which set me up for the week. It was followed by a session covering vSAN native data protection and how to protect mission-critical applications. I had some more customer meetings that day and took some time to check out the VMTN village and catch up with those friends you see once a year at this event.


Tuesday was really the kick-off, where again I had a packed agenda, attending four really good sessions and taking a walk through the Solutions Exchange, which opened on Tuesday. I took my usual approach and worked around the smaller booths on the outside, chatting with various vendors about their offerings. Redis in-memory databases was one that stuck out, with a good use case identified in IoT edge compute. The standout session of the day was "Virtual Volumes Deep Dive", delivered by Pete Flecha, Technical Architect at VMware and co-presenter of the superb Virtually Speaking podcast. What was great about this session was the pitch: even if your customers are not ready for HCI, get them started on SPBM by having them work with vVols, and see how it reduces management overhead and simplifies storage configuration by surfacing capabilities via VASA and consuming them from vSphere. Get on the train!

Later that evening Pete and John Nicholson delivered a vSAN deep-dive session; although it was the last session of the day it was well attended, really showcased SPBM, and delved into the support machine that backs this platform. Being a vSAN fanboy, these sessions were right up my street. While the content was familiar to me, it was great to see these guys deliver and to learn how I can better deliver this content on a daily basis.

Wednesday was also a big day, where I caught some sessions on NSX and Kubernetes and a cool keynote co-presented by Yanbing Li and Duncan Epping that spoke of the future of the SDDC. It highlighted how easy it is to configure a hybrid cloud and move workloads between private and public clouds leveraging VMware's HCX and VMware Cloud on AWS. This is clever tech, and it is clearly moving at a phenomenal rate, so I am looking forward to what's coming next. The other interesting takeaway was native file services on vSAN: simple to configure, and it gives customers a choice of protocols.

On the final day of VMworld 2018, admittedly I arrived a little tired after the previous three days, and attended Daniel Koeck's session on stretched clusters using VMware Cloud Foundation on vSAN, and yes! It was a damn good session. This was so relevant for me because earlier in the week Aaron Buley had announced support for VCF 3.5 on Dell EMC VXRail, and in Europe, with the high standard of networks available, stretched clusters are really popular. I truly believe this is a winning combination. Daniel did a live demo configuring a stretched cluster, which was on point; I like this type of delivery and will be configuring this back in my lab. The other session to call out from this day was a 45-minute brain dump on NSX overlays for vSAN by the great Myles Gray. This really was a cool session: Myles worked through Layer 2 and its limitations, how Layer 3 solves them, the limitations Layer 3 in turn presents at the VM level, and how network overlays at L3 address those. It resonated with me and will help me deliver NSX pitches to my customers. Oh, and BUM flooding got a giggle from a portion of the room.

Before the event ended I spent a little more time at the Solutions Exchange, talking to companies like Druva and Cohesity and learning about their offerings and where they are going.

In summary, this was a fantastic week. I'm tired but so motivated for the months ahead, and I will use what I learned to develop myself and those I work with. I got to meet people I look up to in this industry, reconnect with others I see once a year, and meet new people. To those of you who have not attended a VMworld, I would say make it a priority in your development plan for the year ahead; like everything, you get out of it what you put into it. I have new business contacts, friends, and lots of ideas for projects. Hopefully I am back at VMworld 2019, so thank you all and see you soon.




Oh, and a final shout out to Kim Bottu - The Swagmeister :)
Monday, 29 October 2018

Stretched Clusters - The Importance of Bandwidth Sizing for HCI Solutions.


Building a stretched cluster is simple, right? Two data sites and a witness site, where the witness site's sole role is to hold metadata. In recent editions of vSAN the witness appliance has been simplified; however, looking back at all the discussions I have had with customers, I have to put my hand up and say that bandwidth requirements are often overlooked, or left to the later stages of the solution design. With this in mind I decided to take a look at what resources are out there. The best source for all information related to vSAN is VMware's official vSAN design and sizing documentation, whose material I referenced for the design considerations below. So what do you need to consider? Start with Live Optics or a similar tool that lets you profile your current resource consumption; typically these tools look at compute, storage, and network statistics monitored anywhere from 4 hours to 7 days.



Take an example of a customer who has two sites and an IO requirement of 100,000 IOPS for their 200 virtual servers, and who proposes to split the cluster 4+4+1 (the 1 being the witness location, which holds metadata only). Let's assume a 70/30 read/write split, so 30% of the IO is writes: 30,000 write IOPS. Let's also assume the average IO size is an 8K block.

VMware has done a great job providing sizing guidance, so let's take the above parameters and inject them into the bandwidth formula below. A key point here is that reads are not factored into the sizing exercise. First, the write bandwidth: 30,000 write IOPS x 8 KB per IO = 240 MB/s, which is 1,920 Mbps.

  B = Wb * md * mr

  B = 1,920 Mbps x 1.5 x 1.25 = 3,600 Mbps = 3.6 Gbps


3.6 Gbps is the bandwidth requirement between the data sites based on this data profile. Adding servers, changing protection policies, and so on can result in greater requirements.
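
If you want to play with these numbers yourself, below is a small Python sketch of the same calculation. It is just a restatement of the formula with the example figures from this post; the multipliers follow the guidance explained at the end of this article.

  # Inter-site bandwidth for a vSAN stretched cluster: B = Wb * md * mr.
  # Inputs are the example figures from this post.
  write_iops = 30_000      # 30% writes out of 100,000 total IOPS
  block_kb = 8             # average IO size in KB

  # Write bandwidth in Mbps: IOPS * KB per IO * 8 bits, divided by 1,000
  wb_mbps = write_iops * block_kb * 8 / 1000           # = 1,920 Mbps

  md = 1.5                 # data multiplier (VMware guidance is 1.4)
  mr = 1.25                # resynchronization multiplier (25% headroom)

  b_mbps = wb_mbps * md * mr
  print(f"Data site to data site: {b_mbps / 1000:.1f} Gbps")   # 3.6 Gbps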


Next we need to calculate the bandwidth requirement between the data sites and the witness site. Note that this traffic will be much lower, as it is metadata only. Again there is a formula to assist with the calculation:

  B = 1138 B x 8 x NumComp / 5 seconds

  • 1138 B: the metadata per component; the x 8 converts bytes to bits.
  • NumComp: the number of components, determined by the number of objects and the stripe width used; the more stripes, the more components to factor in.
  • 5 seconds: in the event of a site failure, fail-over to the secondary site must occur within 5 seconds.
If you recall, the estate contains 200 virtual servers; assuming a stripe width of 1 and 3 objects per VM, each mirrored across both data sites, that gives roughly 6 components per VM, or 1,200 components in total.

  B = 1138 B x 8 x 1,200 / 5 = 2.18 Mbps

There is one final calculation to be made here: VMware recommends adding a 10% buffer, so the total requirement is 2.18 Mbps x 1.1 ≈ 2.4 Mbps.
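
And the witness calculation as a similar sketch, including that 10% buffer; the component count is the example figure from above.

  # Witness link sizing: 1138 B per component, converted to bits,
  # spread over the 5-second fail-over window, plus a 10% buffer.
  bytes_per_component = 1138
  num_components = 1200    # example figure from above
  window_s = 5

  bps = bytes_per_component * 8 * num_components / window_s
  mbps = bps / 1_000_000                    # ≈ 2.18 Mbps
  print(f"Data site to witness: {mbps * 1.10:.1f} Mbps")   # ≈ 2.4 Mbps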

In summary, our stretched cluster solution generated the following requirements:
  • Virtual servers: 200
  • Stripe width: 1
  • Components per VM: 6
  • Total IOPS: 100,000
  • Read/write ratio: 70/30
  • Block size: 8K
  • Total bandwidth requirement, data site to data site: 3.6 Gbps
  • Total bandwidth requirement, data site to witness site: 2.4 Mbps
There are several references to buffers here; these come into play when an administrator changes a policy or performs VM creation operations. Because the data requirement between the data sites and the witness is so low, there is no need for a special or high-performance link, and more and more customers tend to host the witness in a cloud service or at a branch office with minimal infrastructure in place. The data-site link, however, is critical to get right: 5 ms round-trip latency is the maximum supported, so realistically a distance of less than 100 km is required, preferably with QoS. This link is supported over both Layer 2 and Layer 3.

So that's it; that's my post on getting your bandwidth sizing right. A breakdown of the formula parameters is listed below.

Note: all parameters used are explained below.
  • B = Wb * md * mr
  • B = bandwidth
  • Wb = write bandwidth
  • md = data multiplier
  • mr = resynchronization multiplier
VMware recommends a data multiplier of 1.4; however, we used 1.5 to keep the numbers simple. You might be asking why the resynchronization multiplier is included: it is there to account for events such as failure states or object policy changes that can invoke a resync. It is key to understand that all changes in a vSAN environment amplify back-end operations, which need to be factored in; the resync allowance should be set at a minimum of 25%.

If you found this post useful, please share!








Monday, 22 October 2018

Building a software defined datacenter with VMware Cloud Foundation - Part 3


Finally getting to part 3 of this series on VCF 3.0. There is some information I cannot share regarding this release, but if you are attending VMworld in Barcelona next month, watch out for some really cool updates surrounding this VCF release. The first thing you notice is the new-look UI, which is customizable, though the default views are pretty comprehensive.



On this UI there is the addition of a "Commission Hosts" button; the prep work from part 1 of this series covers the prerequisites here.



Choose "Commission Hosts" and you will get the below checklist.


The big omission in this release is in workload domain creation: the option to create a "VDI workload domain" has been dropped. I am not sure why, but the documentation just states that you should create a "VI workload domain" and add the View components manually.
On the UI, all options are now menu-driven from the left side of the screen, as can be seen below. Without going through each one, I have picked out the interesting options.


Under "Inventory" we get a full breakdown of all domains created including management domains and all resources consumed on the cluster.


There is also a hosts-only view showing CPU, memory, and network for each resource; please see below:



Under the "Repo's" tab you have the option to add your VMware Account Credentials and all updates will be pulled directly to the SDDC manager. the Pull is not version considerate so you will expect to see bundles for all versions of VCF including versions for solutions such as Dell EMC VXRack.



Finally in this post, let's take a look at the administration tab. You have the usual options here, such as security, account management, and licensing.




So let's look at the vRealize Suite view for now. When you choose this option you have the ability to deploy:

  • vRealize Log Insight
  • vRealize Operations
  • vRealize Automation

The combination of Log Insight and vROps offers all of the monitoring and remediation capabilities you require for day-to-day management of your environment.

vRealize Automation is certainly worth a series of posts, as it is a large, powerful, and complex product; v7.5 has just been released, so it will be interesting to review on its own. The good thing here is that there is no requirement to deploy all three together: simply move at your own pace, deploy the product you are ready to use, and adopt a phased approach to the roll-out. My recommendation is to get the operations piece in place and then move on to the automation components.

These wizards allow you to easily deploy the full suite.


So that is it, short and sweet. The next post will look at deployment of the vRealize Suite, starting with Operations. This is a product that has had a major overhaul and has a road map clearly aligned with vSAN. FYI, if you are not aware of it, the vROps dashboards for vSAN are now included in your vSAN license; if you are not already using them, please take a look, as they contain tonnes of great information relevant to your clusters.

That's it for now. Thanks for reading, and if you liked this post please share.





Saturday, 13 October 2018

Building a software defined datacenter with VMware Cloud Foundation - Part 2

When you deploy the Cloud Builder VM, browse to the address below and log in to the appliance:
https://Cloud_Builder_VM_IP:8008



Enter your credentials, accept the agreement, and choose "Download parameters worksheet".



Once the worksheet is completed, it's time to generate the JSON file. To do this you will need a tool like WinSCP to transfer your .xlsx template to the appliance. Once that is done, issue the following commands before copying the newly generated .json file back to your deployment VM.

Open a PuTTY session and connect to the Cloud Builder VM:


  • Copy the completed worksheet into the JsonGenerator directory:
    sudo cp /home/admin/xlsx_file /opt/vmware/sddc-support/cloud_admin_tools/JsonGenerator

  • Change into that directory:
    cd /opt/vmware/sddc-support/cloud_admin_tools/JsonGenerator/

  • Run the generator against the worksheet:
    sudo python JsonGenerator.pyc -i /opt/vmware/sddc-support/cloud_admin_tools/JsonGenerator/xlsx_file -d vcf-ems

  • Move the generated JSON to the admin home directory:
    sudo mv /opt/vmware/sddc-support/cloud_admin_tools/Resources/vcf-ems/vcf-ems.json /home/admin/

  • Give the admin user ownership of the file so it can be downloaded:
    sudo chown admin:users /home/admin/vcf-ems.json

Using a file transfer utility, download the /home/admin/vcf-ems.json file to the machine you are building from.

Now log back in to Cloud Builder, upload the .json file, and choose "Validate". Assuming all of your parameters are correct, the validation will pass; otherwise it's troubleshooting time!
The next step is to begin the bring-up service.


BOOYAH! You now have a fully operational Death Star... I mean, software-defined datacenter. In the next installment we will log in and review the new-look SDDC Manager UI.