VMware Cloud Management

Thursday, 15 November 2018

The Software Defined Revolution through a Dell EMC Lens




The year is 2018 and we are riding the wave of the digital transformation era. If you are attending technology events or seminars, or just spending time listening to podcasts, you will be well aware that it's a software-defined everything. Leading the way in this space have been companies including VMware, Nutanix and Dell EMC, with stiff competition from some really good companies. It's an exciting time to be involved in this field. The motivation for me to write this post was looking at the recent announcements at VMworld for the jointly engineered platform by VMware & Dell EMC, which is VXRail.
VXRail has been riding a wave of success over the last couple of years, and this month at VMworld came the announcement that VMware Cloud Foundation 3.5 will be supported on this platform. In my opinion this is the fur coat on an already well finished solution. Let me explain that a little. VXRail is a vSAN-based appliance that boasts easy-button deployment wizards, making you operational in hours instead of days. Users of VXRail are running vSAN 6.2 or 6.6, depending on whether they have upgraded. Scaling this solution is straightforward: if you need additional compute you can scale one node at a time, and with several different flavours, such as VDI-optimized nodes, you have choice! In addition to scaling out the cluster, VMware Stretched Cluster is also supported, allowing customers to consume this technology in an active/active topology. Finally, VXRail has a one-click upgrade process, which is the real power: a single bundle can take your entire environment from vSAN 6.2 up to a vSphere 6.5 environment running vSAN 6.6, as an example. In simple terms, what does this mean? It means that system admins don't have to worry about keeping the lights on anymore; they can focus on adding value to their companies.
Fast forward a little, and Dell EMC & VMware announced support for VVD on VXRail, a set of blueprints and best practices to ensure you are configured in the best possible way. Complementing this now is the announcement that VCF 3.5 will be supported on VXRail. This, in my opinion, completes the solution. Why? VCF is deployed aligned to VMware Validated Design (VVD 4.x) with the added capability of NSX-T. At this stage some people might be thinking I am talking about VXRack. I am not; these are very different approaches. VXRack is a very rigid solution, but has rack scale in its favour. The value in VXRail & VCF is clear. The key features as I see them:


  1. Bring your own networking
  2. Management workload domains can be run on 4 nodes
  3. Support for NSX-T
  4. HCX - providing a route to public cloud
  5. Workload domains for Kubernetes
  6. NFS-based workload domains
  7. Support for stretched clusters

Taking the above key features, which are all governed by the new-look SDDC Manager, and adding VXRail Manager into the equation, we now have end-to-end management with a heavy volume of orchestration. A couple of these updates are really interesting.
Referencing point 6: in the last couple of days Dell EMC announced that the midrange storage product Unity was certified for use with VMware Cloud Foundation 3.5. This offers flexibility, especially for service providers who often have predefined bundles per customer type. Now they can use vSAN-based workload domains alongside NFS-based workload domains, with a wide range of Unity models to pick from.
https://blog.dellemc.com/en-us/dell-emc-unity-is-first-storage-platform-validated-with-vmware-cloud-foundation/
Referencing point 7: I attended a session by Daniel Koeck at VMworld in Barcelona where he walked through configuring a stretched cluster using VCF. It's early days for this feature, but it offers flexibility unseen in the rack-scale solutions provided by the industry leaders.


I am really excited about this release, and over the coming weeks I will be testing VCF 3.5 on vSAN with NFS WLDs. I plan to evaluate as much of the above as possible and will do my best to share that information.

If you would like to learn more about VMware Cloud Foundation, follow the link below:
https://my.vmware.com/en/web/vmware/evalcenter?p=cloud-foundation-18-hol

Learn about VXRail at the link below:
https://www.dellemc.com/en-ie/converged-infrastructure/vxrail/index.htm

I hope you are as excited as I am. If you like this post, please take a minute and share.


Saturday, 10 November 2018

VMworld Barcelona 2018 Recap



Last week I attended VMworld Barcelona and it was a cracker of a week. I felt this year had a more packed agenda, with extra breakout sessions along with many more activities both at the Fira Gran Via centre and at nearby venues. My first thought at the end of the event was that I could breathe again; it was such a whirlwind week. It kicked off on Monday, which is partner day, with an excellent keynote from Jean-Pierre Brulard, and Pat Gelsinger spoke of the shift from migration to public cloud towards businesses taking a hybrid cloud approach, which in my opinion is the best approach, as the TCO of public cloud is still far greater than on-premises.

My morning kicked off with the mandatory coffee, but also a roundtable discussion with Barry Coombs of ComputerWorld and some of his partners and customers, where we chatted about hyperconverged technologies, their direction, and the role of block storage in the modern datacenter. This was a super positive discussion which set me up for the week. It was followed by a session covering vSAN native data protection and how to protect mission-critical applications. I had some more customer meetings that day and took some time to check out the VMTN village and catch up with those friends you see once per year at this event.


Tuesday was really the kick-off, where again I had a packed agenda, attending four really good sessions and taking a walk through the Solutions Exchange, which opened on Tuesday. I took my usual approach and worked around the outside smaller booths, chatting with various vendors about their offerings. Redis in-memory databases was one that stuck out, identifying a good use case in IoT edge compute. The standout session this day was "Virtual Volumes Deep Dive", delivered by Pete Flecha, Technical Architect at VMware and co-presenter of the super Virtually Speaking podcast. What was great about this session was the pitch. The message here was: even if your customers are not ready for HCI, get them started on SPBM by having them work with vVols, and see how it reduces management overhead and simplifies storage configuration by surfacing capabilities via VASA and consuming them from vSphere. Get on the train!

Later that evening Pete and John Nicholson delivered a vSAN deep dive session; although the last session of the day, it was well attended and really showcased SPBM, delving into the support machine that backs this platform. Being a vSAN fanboy, these sessions were right up my street. While the content was familiar to me, it was great to see these guys deliver and to learn how I can better deliver this content on a daily basis.

Wednesday was also a big day, where I caught some sessions on NSX and Kubernetes and a cool keynote co-presented by Yanbing Li and Duncan Epping. This session spoke of the future of the SDDC plane and highlighted how easy it is to configure a hybrid cloud and move workloads between private and public, leveraging VMware's HCX and VMware Cloud on AWS. This is clever tech, and it is clearly moving at a phenomenal rate, so I am looking forward to what's coming next. The other interesting takeaway here was native file services on vSAN: simple to configure, and it gives customers a choice of protocols.

On the final day of VMworld 2018, admittedly I arrived a little tired after the previous three days, and attended Daniel Koeck's session on stretched clusters using VMware Cloud Foundation on vSAN, and yes! It was a damn good session. This was so relevant for me, as earlier in the week Aaron Buley announced support for VCF 3.5 on Dell EMC VXRail, and in Europe, with the high standard of networks available, stretched clusters are really popular. I truly believe this is a winning combination. Daniel did a live demo configuring a stretched cluster, which was on point; I like this type of delivery and will be configuring this back in my lab. The other session to call out from this day was a 45-minute brain dump of NSX overlays for vSAN by the great Myles Gray. This really was a cool session, as Myles worked through Layer 2 and its limitations, and how Layer 3 solves them; this led to identifying the limitations Layer 3 presents at a VM level, and how network overlays at L3 address those. This session really resonated with me and will help me deliver NSX pitches with my customers. Oh, and BUM flooding got a giggle from a portion of the room.

Before the event ended I spent a little more time at the Solutions Exchange, talking to companies like Druva and Cohesity and learning about their offerings and where they are going.

In summary, this was a fantastic week. I'm tired but so motivated for the months ahead, and I will use what I learned to develop myself and those I work with. I got to meet people I look up to a lot in this industry, reconnect with others I see once a year, and meet new people. To those of you who have not attended a VMworld, I would say make it a priority in your development plan for the year ahead. Like everything, you get out what you put in. I have new business contacts, friends, and lots of ideas for projects. Hopefully I will be back at VMworld 2019, so thank you all and see you soon.




Oh, and a final shout out to Kim Bottu - The Swagmeister :)
Monday, 29 October 2018

Stretched Cluster - The Importance of Bandwidth Sizing for HCI Solutions


Building a stretched cluster is simple, right? Two data sites and a witness site, where the role of the witness site is solely to hold metadata. In recent editions of vSAN the witness appliance has been simplified; however, when I look back at all of the discussions I have been having with customers, I have to put my hand up and say that discussing bandwidth requirements is often overlooked, or just left to the later stages of the solution design. With this in mind I decided to take a look at what resources are out there. The best source for all information related to vSAN is:
            

I referenced this site's material for the design considerations. So what do you need to consider? Start with Live Optics or a similar tool that will allow you to profile your current resource consumption. Typically these tools look at compute, storage and network statistics, monitored anywhere from 4 hours to 7 days.



Take an example of a customer who has two sites with an I/O requirement of 100,000 IOPS for their 200 virtual servers. The customer proposes to split the cluster 4+4+1 (the 1 is the witness location, which will hold metadata only). Let's assume a 70/30 read/write split, so 30% of the I/O is writes: 30,000 IOPS. Let's also assume that the average I/O size is an 8K block.

VMware have done a great job providing the sizing guidance, so taking the above parameters and injecting them into the bandwidth formula below, let's walk through this example. The key point here is that reads are not factored into the sizing exercise.

                            B = Wb * md * mr

                            Wb = 30,000 write IOPS x 8 KB = 240 MB/s = 1,920 Mbps

                            B = 1,920 Mbps x 1.5 x 1.25 = 3.6 Gbps


3.6 Gbps is the bandwidth requirement between the data sites based on this data profile. Adding servers, changing protection policies and so on can result in greater requirements.


Next we need to calculate the bandwidth requirement between the data sites and the witness site. Note that this traffic will be much lower, as it is metadata only. Again, there is a formula to assist with this calculation.

  1138 B x NumComp / 5 seconds

  • NumComp: the number of components. This is determined by the number of stripes used; the more stripes, the more components to be factored in.
  • 5 seconds: in the event of a site failure, fail-over to the secondary site must occur within 5 seconds.
If you recall, the estate contains 200 virtual servers, and we are assuming a stripe width of 1 and 3 components for each virtual machine. With components mirrored across both data sites, that gives 200 x 3 x 2 = 1,200 components.

                            1138 B x 8 bits x 1,200 components / 5 s = 2.18 Mbps

There is one final calculation to be made here: VMware recommends adding a 10% buffer, so our total requirement is 2.18 Mbps x 1.1 ≈ 2.4 Mbps.
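
To make this repeatable, below is a minimal Python sketch of both calculations, using the figures from this example. The function names and defaults are my own, for illustration only; they are not part of any VMware sizing tool.

# Rough vSAN stretched cluster bandwidth sizing, following the two formulas above.
def site_to_site_mbps(write_iops, io_size_kb, md=1.5, mr=1.25):
    """B = Wb * md * mr, with Wb derived from write IOPS and I/O size."""
    wb_mbps = write_iops * io_size_kb * 8 / 1000  # KB/s -> Mbps
    return wb_mbps * md * mr

def witness_mbps(num_components, window_s=5, buffer=1.10):
    """1138 bytes per component, delivered within 5 seconds, plus a 10% buffer."""
    bps = 1138 * 8 * num_components / window_s
    return bps / 1_000_000 * buffer

# 100,000 IOPS at a 70/30 read/write split -> 30,000 write IOPS of 8K blocks
print(site_to_site_mbps(30_000, 8))   # 3600.0 Mbps, i.e. 3.6 Gbps
# 200 VMs x 3 components, mirrored across both sites -> 1,200 components
print(witness_mbps(200 * 3 * 2))      # ~2.4 Mbps

Swap in your own Live Optics figures to size other estates.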

In summary, our stretched cluster solution generated the following requirements:
  • Virtual servers: 200
  • Stripe width: 1
  • Components per VM: 3
  • Total IOPS: 100,000
  • Read/write ratio: 70/30
  • Block size: 8K
  • Total bandwidth requirement, data site to data site: 3.6 Gbps
  • Total bandwidth requirement, data site to witness site: 2.4 Mbps
There are several references to buffers here; these can be called into action when an administrator makes a change to a policy or performs VM creation operations. As the bandwidth requirement between the data sites and the witness is so low, there is no need for a special or high-performance link, and more and more customers are tending to host the witness in a cloud service or a branch office with minimal infrastructure in place. The data site link, however, is critical to get right: 5 ms round-trip latency is the maximum supported, so realistically a distance of less than 100 km is required, preferably with QoS. This link is supported on both Layer 2 and Layer 3.

So that's it, that's my post on getting your bandwidth sizing right. A breakdown of the formula parameters is listed below.

  • B = Wb * md * mr
  • B = Bandwidth
  • Wb = Write bandwidth
  • md = Data multiplier
  • mr = Resynchronization multiplier
VMware recommends a data multiplier of 1.4; however, we used a multiplier of 1.5 to keep this nice and simple. I guess you're asking now why the resynchronization multiplier is included? It is there to factor in events such as failure states or object policy changes that can invoke a resync. It really is key to understand that all changes in a vSAN environment will amplify back-end operations, which need to be factored in. The resynchronization allowance should be set at a minimum of 25%, hence the 1.25 multiplier.

If you found this post useful, please share!








Monday, 22 October 2018

Building a software defined datacenter with VMware Cloud Foundation - Part 3


Finally, I am getting to part 3 of this series on VCF 3.0. There is some information I cannot share regarding this release, but if you are attending VMworld in Barcelona next month, watch out for some really cool updates surrounding this VCF release. The first thing to note is the new-look UI, which is customizable, though the default views are pretty comprehensive.



On this UI there is a new "Commission Hosts" button; the prep work from part 1 of this series covers the prerequisites here.



Choose "Commission Hosts" and you will get the checklist below.


The big omission in this release is in workload domain creation: the option to create a "VDI workload domain" has been dropped. I am not sure why, but the documentation simply states that you should create a "VI workload domain" and add the View components manually.
On the UI, all options are now menu-driven from the left side of the screen, as can be seen below. Without going through each one of these, I have picked out the interesting options.


Under "Inventory" we get a full breakdown of all domains created, including management domains, and all resources consumed on the cluster.


There is also a hosts-only view showing CPU, memory and network for each resource; please see below:



Under the "Repos" tab you have the option to add your VMware account credentials, and all updates will be pulled directly to the SDDC Manager. The pull is not version-aware, so you can expect to see bundles for all versions of VCF, including versions for solutions such as Dell EMC VXRack.



Finally, in this post let's take a look at the Administration tab. You have the usual options here, such as security, account management and licensing.




So let's look at the vRealize Suite view for now. When you choose this option you have the ability to deploy:

  • vRealize Log Insight
  • vRealize Operations
  • vRealize Automation

The combination of Log Insight and vROps offers all of the monitoring and remediation capabilities you require for day-to-day management of your environment.

vRealize Automation is certainly worth a series of posts, as it is a large, powerful and complex product; v7.5 has just been released, so it will be interesting to review that alone. The good thing here is that there is no requirement to deploy all three together: simply move at your own pace, deploy the product you are ready to use, and adopt a phased approach to roll-out. My recommendation here is to get the operations piece in place first and then move on to the automation components.

These wizards allow you to easily deploy the full suite.


So that is it, short and sweet. The next post will look at deployment of the vRealize Suite, with Operations as the first post. This is a product that has had a major overhaul and has a roadmap clearly aligned with vSAN. FYI, if you are not aware of it, vROps dashboards for vSAN are now included in your vSAN license. If you are not already using them, please take a look, as they contain tonnes of great information relevant to your clusters.

That's it for now. Thanks for reading, and if you liked this post please share.





Saturday, 13 October 2018

Building a software defined datacenter with VMware Cloud Foundation - Part 2

Once you have deployed the Cloud Builder VM, browse to the address below and log in to the appliance:
https://Cloud_Builder_VM_IP:8008.



Enter your credentials, accept the agreement and choose "Download parameters worksheet".



Once the worksheet is completed, it's time to generate the JSON file. To do this you will need a tool like WinSCP to transfer your .xlsx template to the appliance. Once that is complete, issue the following commands before copying the newly generated .json file back to your deployment VM.

Open a PuTTY session and connect to the Cloud Builder VM:


  • sudo cp /home/admin/xlsx_file /opt/vmware/sddc-support/cloud_admin_tools/JsonGenerator   # copy the completed worksheet into the generator directory

  • cd /opt/vmware/sddc-support/cloud_admin_tools/JsonGenerator/   # move into the generator directory

  • sudo python JsonGenerator.pyc -i /opt/vmware/sddc-support/cloud_admin_tools/JsonGenerator/xlsx_file -d vcf-ems   # generate the vcf-ems JSON from the worksheet

  • sudo mv /opt/vmware/sddc-support/cloud_admin_tools/Resources/vcf-ems/vcf-ems.json /home/admin/   # stage the generated file in the admin home directory

  • sudo chown admin:users /home/admin/vcf-ems.json   # fix ownership so the admin account can download it

Using a file transfer utility, download the /home/admin/vcf-ems.json file to the machine you are building from.
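
As an aside, if you find yourself repeating this bring-up often, the manual steps above can be scripted. Below is a rough Python sketch using the third-party paramiko module; the IP address, credentials and worksheet name are placeholders, and it assumes the admin account can run sudo without a password prompt (adjust if your appliance prompts).

# Automate the worksheet upload, JSON generation and download in one go.
import paramiko  # third-party: pip install paramiko

CB_HOST = "172.16.10.20"                  # hypothetical Cloud Builder IP
CB_USER, CB_PASS = "admin", "changeme"    # placeholder credentials
SHEET = "vcf-parameters.xlsx"             # your completed worksheet
TOOLDIR = "/opt/vmware/sddc-support/cloud_admin_tools"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(CB_HOST, username=CB_USER, password=CB_PASS)

sftp = ssh.open_sftp()
sftp.put(SHEET, f"/home/admin/{SHEET}")   # upload the worksheet

commands = (
    f"sudo cp /home/admin/{SHEET} {TOOLDIR}/JsonGenerator",
    f"cd {TOOLDIR}/JsonGenerator && sudo python JsonGenerator.pyc "
    f"-i {TOOLDIR}/JsonGenerator/{SHEET} -d vcf-ems",
    f"sudo mv {TOOLDIR}/Resources/vcf-ems/vcf-ems.json /home/admin/",
    "sudo chown admin:users /home/admin/vcf-ems.json",
)
for cmd in commands:
    _, stdout, _ = ssh.exec_command(cmd)
    stdout.channel.recv_exit_status()     # wait for each step to finish

sftp.get("/home/admin/vcf-ems.json", "vcf-ems.json")  # pull the result back
ssh.close()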

Now we need to log back into Cloud Builder, upload the .json file and choose "Validate".
Assuming all of your parameters are correct, the validation will pass; otherwise it's troubleshooting time!
The next step is to begin the bring-up service.


BOOOOYAH! You now have a fully operational Death Star... I mean, software defined datacenter. In the next installment we will log in and review the new-look SDDC Manager UI.

Friday, 12 October 2018

Building a Software Defined Datacenter with VMware Cloud Foundation - Part 1


I posted recently about some of the updates to VCF, but this week I built a management workload domain on a 4-node vSAN Ready Node solution. I used Dell R640 All-Flash Ready Nodes for this task, along with Dell EMC S4048 ToR switches.


So, to kick this off, what do you need?

  • 4 x Ready Node servers for the management domain; this is the minimum, however more nodes are recommended for performance.
  • 2 x ToR Switches
  • 1 x MGMT Switch.


Checkpoint: at this point with VCF 2.3.x you would be removing the config from these switches and running them in ONIE mode to be imaged by a VIA appliance. No longer! Now you can bring your own networking (BYON) and configure the switches any way you would like.

The switch prerequisites are straightforward. You need the following VLANs, and take note: if you want the validation process to pass without issue, tag the management VLAN too. So create the following:

  • Management VLAN
  • vMotion VLAN
  • vSAN VLAN
  • VXLAN

All of this traffic will be carried over 2 x 10Gb uplinks. If you would like to have iDRAC for OOB management, that's fine also.



The next step is to get an ESXi 6.5 U2 image on the hosts. I know what you're thinking: I could not be bothered doing this. The good thing here is that you can take the custom build of 6.5 relevant to the node manufacturer, which means you will have all of the necessary drivers, BIOS support and so on. For my build I had a BOSS card to install the hypervisor on. Once imaged, apply the desired management IP address and DNS details.

Yes, DNS! You need AD, BIND or another form of DNS to validate. Ensure you have reverse lookup configured and tested in advance. In addition you will need NTP: the hosts, Cloud Builder and NTP server need to be within 30 seconds of each other.
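
It is worth sanity-checking this before validation. Below is a minimal Python sketch that verifies forward and reverse lookups for each host and checks this machine's clock offset against the NTP source; it uses the third-party ntplib module, and all hostnames, IPs and the NTP server are placeholders, not part of any VCF tooling.

# Pre-flight check: forward/reverse DNS per host, plus local NTP offset.
import socket
import ntplib  # third-party: pip install ntplib

HOSTS = {
    "esx01.vcf.lab": "172.16.10.11",  # hypothetical ESXi management addresses
    "esx02.vcf.lab": "172.16.10.12",
    "esx03.vcf.lab": "172.16.10.13",
    "esx04.vcf.lab": "172.16.10.14",
}
NTP_SERVER = "172.16.10.5"  # hypothetical NTP source
MAX_SKEW_S = 30             # hosts, Cloud Builder and NTP must be within 30s

for name, ip in HOSTS.items():
    forward = socket.gethostbyname(name)   # A record lookup
    reverse = socket.gethostbyaddr(ip)[0]  # PTR record lookup
    print(f"{name}: forward {'OK' if forward == ip else 'MISMATCH'}, "
          f"reverse {'OK' if reverse.lower() == name else 'MISMATCH'}")

offset = ntplib.NTPClient().request(NTP_SERVER, version=3).offset
status = "OK" if abs(offset) <= MAX_SKEW_S else "TOO FAR APART"
print(f"NTP offset against {NTP_SERVER}: {offset:.2f}s ({status})")

Run the same check from each machine involved in the bring-up to catch skew between them.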

Finally, enable SSH access and your prep is complete. Nice and easy to this point.
Next, using a deployment host, download and deploy the VCF Cloud Builder VM. This has everything included; all you need to do is ensure you have your license keys ready.

Lets park it here for now and we will complete the process in the next post.

Monday, 8 October 2018

VMware Cloud Foundation 3.0 just got a facelift


 


VMware Cloud Foundation is built on vSAN, NSX and SDDC Manager. If you were at VMworld in Las Vegas you will know that version 3.0 has been released. Version 2.3.2 is still the current shipping version for VMware partners that are building engineered solutions such as the Dell EMC VXRack. The 2.3.2 build is extremely prescriptive on hardware, particularly networking, where only in that release was support added for some vendors' switching for ToR use, with Dell EMC, Cisco and Arista being the most popular choices. Last month I built that version on Ready Nodes and it was challenging taking the DIY approach, which really demonstrates the value in an engineered solution. Now I have flattened this environment and am building VCF 3.0, and it is a breath of fresh air.





So let's take a look at some of the key enhancements and changes in VCF 3.0 over the 2.x editions.

1. BYON (Bring Your Own Networking)
Now you can use existing switches or your own choice of vendor, once you adhere to the planning and preparation guide provided by VMware:

https://docs.vmware.com/en/VMware-Cloud-Foundation/3.0/vcf-30-planprep-guide.pdf

2. VIA is no longer required for the imaging process. Now you can install the hypervisor yourself; the recommended version here is ESXi 6.5 U2. VIA has been replaced by the Cloud Builder virtual appliance: you simply configure your JSON file with your network parameters, VLANs, credentials and so on. This new appliance is Photon OS-based and includes a thorough built-in validation tool. There is quite a bit of prep work to be done ahead of the bring-up; however, it's a good blend of automation and control over the environment. I will do a deployment blog for a management domain in the coming weeks.
3. The SDDC Manager has been totally overhauled, with a feature-rich UI for managing health and performance, along with workload domain creation wizards. VDI is a great use case here: bring your own golden image and you can deploy an entire Horizon environment on a vSAN-based workload domain in just a few clicks.
4. vSAN Ready Nodes are the VMware approach, and now the entire vSAN HCL is supported with VCF.
5. Another cool feature is vSAN stretched clusters across dual availability zones. This was a major limitation of the 2.x release. The usual configuration requirements apply, such as the round-trip latency limits.
There is a lot more, but this is it for now. I am building a 3.0 environment at present and will configure all the new use cases, including some sweet capability being added to workload domains. Watch this space!

A few useful links:

https://docs.vmware.com/en/VMware-Cloud-Foundation/3.0/rn/VMware-Cloud-Foundation-30-Release-Notes.html

https://docs.vmware.com/en/VMware-Cloud-Foundation/3.0/com.vmware.vcf.ovdeploy.doc_30/GUID-F2DCF1B2-4EF6-444E-80BA-8F529A6D0725.html

https://blogs.vmware.com/cloud-foundation/2018/09/19/vmware-cloud-foundation-3-0-architecture-poster/